Eran's blog

Your UI should be on My side

There’s been much hoopla over Facebook’s latest redesign. Personally, I kind of like it, although many do not. I consider that a matter of taste and preference, and maybe of being used to how things were before. In a few months this will be so far gone you won’t even remember it was a problem.

What I do not like is some of the recent changes they’ve made where pieces of the UI that used to contain recommendations for new friends (based, I’m guessing, on factors like common friends and location) now contain what appear to be random endorsements for public pages Facebook thinks I should be a Fan of for no apparent reason (at least to me).

[Image: Facebook suggesting I become a fan of Katy Perry]
I don’t even know who Katy Perry is and I sure as hell am not becoming her fan.

My point is, if Facebook wants me to find their UI useful they should keep it consistent. Not just in the basic UI sense where things should be where I expect them to be but also in the content sense. Placing paid content where I expect real content will only fool me for a very short time. As soon as I realize the truth, I’ll start treating that entire space as an ad and completely ignore it. This makes it a lose-lose for both me and Facebook.

I lose because that space used to contain somewhat useful information that I won’t see anymore, but more importantly, Facebook loses because the information that was there used to make me create new connections with people, making the social network richer and therefore making Facebook more valuable.

I understand that Facebook is trying to appear more valuable to brands but hijacking my interface is not the way to do it. It is, on the other hand, the way to make me look for another interface or another network altogether.


Filed under: General, Social Software

How NOT to build your brand

Here’s why my friends at Get Satisfaction say that Customer service is the new marketing.

AT&T has completely dropped the ball here at SXSW’09. All those geeks running around with their iPhones have brought the network to its knees. You can barely get basic cell service anywhere around the convention center and around any large party. Here, Google will tell you all about it.

On the other hand, SXSW organizers and the Austin Convention Center completely rocked the Wifi setup. For the most part we’ve been getting great service for all the Wifi enabled smart phone and laptops running around the convention center. iPhones included! Twitter will tell you all about that.

The lesson here: giving bad service to a couple thousand hyper-connected technology influencers is not how you build your brand.

Filed under: General, Mobile

First Impressions – Kindle 2

I decided to give in to my gadget lust and bought the Kindle 2 as soon as I heard about it. It came in today and here are my first impressions of the device so far:

  • The device itself feels very nice to hold: solid yet light (but not too light). The screen looks pretty good as well.
  • The screen flashes when you scroll through pages. I guess that’s how the e-ink works. Slightly annoying but I’m already getting used to it.
  • The layout of the control buttons is pretty good. It would’ve been nice to have left-side and right-side buttons for scrolling back and forward, but the current setup is good enough. The 5-way scroll button feels somewhat cheaply made and not as accurate as one would hope, but is definitely usable.
  • The keys on the keyboard are nice enough to use but the layout could use some help. My fingers are used to the different rows being slightly offset from each other, and the boxy design of the Kindle keyboard will take some getting used to.
  • The basic web browser renders websites pretty well but I don’t think I’d be using it for anything complicated. Sticking to mobile versions of Google apps and other websites for now. Gmail, Reader and Calendar all have mobile versions that work pretty well on the Kindle 2.
  • Downloading books is fun and easy. Got some Doctorow from manybooks.net (use mnybks.net on the kindle), downloads run in the background while you keep on browsing and the books end up on your Home screen.
  • Reading books is also fun. Brings me back to the days of reading eBooks on my Palm Vx. The Kindle remembers exactly where you were and will drop you right back there when you turn it back on or return to the same book.
  • Immediate access to the dictionary when you point at a word is a very cool feature. A little window pops up at the bottom of the screen with the definition and a click lets you focus on that and switch to the full definition in the pre-installed dictionary.
  • The annotation mechanism looks pretty nice but I haven’t played around with it much. You can add annotations to specific text sections, you can highlight text and you can collect text-clippings. All seem very useful for research oriented tasks. It would’ve been nice if the clippings file actually contained links to the source text instead of just a textual reference. I’ll have to see how much of a pain that is when I start using this more.

The one thing that would make this a truly awesome device? A touch screen. Scroll using motions, mark text with your finger or a stylus, click through to annotations, etc. Make e-paper feel more like paper.

Filed under: General

On News Feeds

Justin Smith on Inside Facebook lays out the business case for Facebook Connect: selling increased visibility in news feeds as a way to generate traffic to your Connect-ed website. Facebook’s news feed was always based on some hidden, inscrutable algorithm that offers very little control either to the users reading the feeds or to the applications publishing stories. Instead, decisions on what stories show up in your feed are made by Facebook with some basic feedback from the user. This results in a confused, disordered, sometimes repetitive feed, with stories that are days old folded in together with up-to-date information, and some stories (possibly more relevant to the user) never showing up at all.

This is where Twitter’s “transcendent clarity” trumps Facebook’s. I know that every tweet published by my friends will show up in my feed (modulo bugs and outages, of course) and mostly in the correct order. I know that stories that are more important to me (@replies and direct messages) will be captured in their own feeds, so I’ll be sure to see them. I can decide for myself how much attention to pay to each person or to Twitter in general. Sometimes I’ll read every single tweet and reply to a couple, but often enough I just skim unread tweets for anything that looks interesting.

Of course, with Twitter’s API I can build additional services to cut and slice that feed to my liking. On Facebook, even with the Platform, I have very limited access to the feed, and as far as I can tell there’s not even an RSS version of my news feed. For my money, if Facebook wants to take over or replace the Web it should learn more from it. What made the Web and the Internet so successful is openness – open standards and open software. It’s also what made Twitter’s news feed an amazing success despite Facebook’s attempts to co-opt that feature.

Filed under: Social Software

Bulk Email Tips and Tricks

DomainKeys authentication – DomainKeys is designed to verify the Email sender’s domain. Most ISPs out there seem to have moved on to DKIM (DomainKeys Identified Mail), but not Yahoo. To get Yahoo to authenticate your signed messages you need to include a DomainKeys signature. You can actually get both DomainKeys and DKIM together with DKIMProxy, but the documentation is slightly out of date.

To get DKIMProxy to sign using DomainKeys in addition to (or instead of) DKIM, follow the instructions for setting up a dkimproxy_out.conf file on Brandon Checketts’ web site. You may also find his DKIM Validator useful.
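To give a rough idea, a dkimproxy_out.conf that emits both signature types looks something like this (the domain, selector and key path below are placeholders, not real values):

```
# dkimproxy_out.conf -- hypothetical example values
domain    example.com
selector  mail
keyfile   /var/db/dkim/mail.key
signature dkim(c=relaxed)        # DKIM, for most ISPs
signature domainkeys(c=nofws)    # legacy DomainKeys, for Yahoo
```

Listing two signature lines makes DKIMProxy add both signature headers to each outgoing message, so Yahoo gets its DomainKeys signature while everyone else verifies the DKIM one.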

Sending Email on Behalf – Sometimes you want the sender to be someone other than yourself. For example, when users send invites through your website you may want the From field to show the sender’s Email address and name. This requires understanding some SMTP subtleties (envelope sender vs. Sender header vs. From header), but there are some good examples of how to do this on the openSPF website.
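As a quick sketch of the distinction (all addresses here are hypothetical): the envelope sender is what you give in the SMTP MAIL FROM command and should stay in your own domain, since that’s where SPF is checked and where bounces go; the From and Sender headers live inside the message itself and are what the recipient’s client displays:

```
MAIL FROM:<bounces@yourapp.example>        <- envelope sender (SPF checked here)
RCPT TO:<friend@example.com>
DATA
From: Alice User <alice@example.com>       <- what the recipient sees
Sender: invites@yourapp.example            <- who actually submitted the message
...
```

Keeping the envelope sender in your own domain means SPF passes for your servers even though the From header shows your user’s address.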

Sending from Non-standard Ports using JavaMail – If you’re using DKIMProxy you may end up sending email through port 587 (based on the recommended setup). Java makes it a bit more complicated than it should be, but I found a good bit of sample code here. It boils down to something like:

import javax.mail.Transport;
import javax.mail.URLName;
import com.sun.mail.smtp.SMTPTransport;

// Connecting explicitly is what makes the non-standard port take effect
Transport tr = new SMTPTransport(session, new URLName(smtpHost));
tr.connect(smtpHost, smtpPort, username, password);
tr.sendMessage(msg, recipients);
tr.close();

Calling the connect method yourself is the important part. It seems that otherwise SMTPTransport will use port 25 even if you specify differently in the session’s properties.

Filed under: Java, The Net

AvaPeeps Update

AvaPeeps: FlirtNation has been progressing quite rapidly. In the few months since we launched on the web we’ve been hard at work streamlining the user experience, making it easier to join and more fun to play.

Mobile and Web still offer quite different experiences, and creating a game that works well and is fun and engaging on both platforms is an interesting challenge. The last couple of updates focused mostly on making the web experience more fun, taking advantage of the extra screen real estate and better UI options. It shows, too: the game flows much better and we see more people enjoying it every day. We’ve also been noticed by the blogosphere. Inside Social Games writes:

What is most interesting, however, is that AvaPeeps was originally a mobile game that migrated to the web space (rather than the other way around). That said, you are also capable of playing the game using T-Mobile, BOOST, Virgin, or Three (in the UK). Mobile games isn’t particularly unusual, but in this case, the game has been tied to the web version as well, allowing players to interact with each other in real time, from anywhere, regardless of whether or not they are using the web or a mobile device.

We’ve got plenty more planned for the next few months, so stay tuned!

Filed under: Projects

AvaPeeps.com Launched!

Finally, after lots of hard work by many talented people AvaPeeps.com is live. Still in beta but at least it’s out there. AvaPeeps: Flirt Nation is our (by which I mean, Digital Chocolate’s) avatar based dating game. It’s a lot of fun and a cool place to meet new people. It’s also one of the first (if not the first) cross platform social games to connect Mobile and Web audiences through a shared experience.

In AvaPeeps you get to create a customizable avatar to represent you in the complex world of flirting and dating. You then get to send your avatar to various hangouts (you’ll probably find me at The Mosh Pit) to try and hook up with hotties of your preferred gender. Next, of course, comes dating. Let your peep know what you think is the perfect date and then see the story unfold. Along the way you’ll get to make friends and exchange messages with the people behind the peeps.

Working on AvaPeeps has been (and still is) an interesting challenge. Creating an engaging experience that translates well both to mobile and the web is not easy. We’ve faced many challenges on many fronts, but I think we’ve come up with a pretty good product so far. I’m looking forward to seeing how it evolves to live on even more platforms and interact more with the rest of the world.

Filed under: gaming

Life, Online and Offline

Just finished reading On and Off the ‘Net: Scales for Social Capital in an Online Era by Dmitri Williams. As the title suggests, the article describes a new measure for social capital online and offline. An interesting read despite being a little technical (not “my kind” of technical, rather statistics and social science technical). Looking over the list of questions for measuring social capital online and/or offline, I found myself thinking that I can’t really answer those questions properly anymore. It’s not that I’m lacking in a social life, quite the opposite actually, but my online and offline lives have almost completely merged (and in some cases flipped) since I moved to San Francisco.

When I lived in Israel my online life consisted of friends in the US that I connected to via IRC, Email, IM, etc., and offline friends and family that I saw and interacted with in person on a regular basis. Now those two have flipped, and some of the people I was closest to in RL became online-only entities (except for when I visit back home). That’s interesting but somewhat expected, considering that I flipped my life around by moving halfway around the world. What’s more interesting to me is the merging of my online and offline life here in SF.

I have two main clusters in my social network. One is a group of friends that mostly formed online on tribe.net and then became a strongly connected real-life urban tribe. The other is a group I met initially in real-life (if you can call Web 2.0 parties real, that is) and I now experience on a daily basis via twitter, blogs, IM, etc. I am still a part of both groups both online and offline which makes separating my online and offline lives pretty hard. @rk and Ryan are the same person and the same is true for almost everyone else I know; there’s nowhere to draw a line.

It might be just me; I am, after all, mostly an introvert and I don’t meet new people very easily. I’ve pretty much saturated my social capacity, so I don’t participate much in online or offline activities where I can form new weak ties (aka networking); also, my friends and I are mostly very comfortable with technology and use it to communicate constantly. But I cannot help but think about the disconnect described at the beginning of the paper, where social researchers failed to recognize that the Internet can be used for (and is indeed a hotbed of) social activity, and wonder whether a similar shift might be occurring now.

To me, the Internet is just another way to communicate. In many ways it’s a more efficient way to organize my social life, coordinate with friends and keep tabs on what’s going on in and around my social circle. Separating my online life from my offline life is a futile effort and just doesn’t feel right. It’s all just Life and it’s all Real.

Filed under: Social Software

Twitter Killfile

Now that everyone’s gone to SXSWi and left me here, alone in the Bay Area with nothing to do and all this parking (not really, there’s still no parking), I finally have some time to be productive. To make sure that my twitter stream remains somewhat useful for the next 10 days or so, I created this small Greasemonkey script to filter out all that South By Noise(tm). It won’t help if you use twitter from some desktop client or IM, but if you’re like me and still using Firefox, it may actually come in handy.


  • Yes, you need Greasemonkey to run this.
  • Yes, it probably has bugs.
  • Yes, it may cause your computer to implode and your entire online identity to be sucked into the ensuing vortex. Use at own risk!
  • Last, I based the script on the Reddit Content Filter script by pabs. So, thanks dude!

Oh, install the script and add banned words and authors using Tools > Greasemonkey > User Script Commands > Edit Blocked (Users/Keywords)…

Update: Slight bug fix. Latest version is 0.21

Filed under: Projects

Trials, Tribulations and Garbage Collection

Note: I am by no means an expert on Java garbage collection. What follows is a description of my attempts to battle memory- and GC-related issues in our server. If you think I’ve been especially thick about something, missed some obvious options or went completely off-track, please let me know. Also a warning: this one’s kinda long 🙂

During our latest bout of long-running, high-load endurance tests we identified garbage collection as a main obstacle. The pattern observed after about 2-3 days of running at high load was very frequent full garbage collection cycles, each causing a full stop of the JVM for several seconds. This of course led to very slow response times from the affected servers. Over the last week we set out to improve the GC behavior of our application.

The Application

  • A highly concurrent web application running on Tomcat 5.5 and Java 5.0 on multiple app servers.
  • Using Hibernate to access a MySQL database, with a distributed EhCache as a 2nd-level cache.
  • Most requests are very short lived, less than 100ms response time.
  • Some background processing, in-memory caching of frequently referenced objects and some shared objects pools.
  • Running on dedicated dual core servers with 8GB RAM.

Trials, Tribulations and Garbage Collection

Short response times are important, so we’d been using the low-pause garbage collector (also known as Concurrent Mark & Sweep, or CMS) quite successfully for shorter runs (around 2 days). Things started falling apart after a longer run that did not employ second-level caching. My assumption at the time was that CMS is not particularly efficient in this sort of situation (a large number of short-lived objects and a small number of longer-lived ones), so my first step was to re-enable second-level caching.

With the cache enabled we did see an improvement in application behavior, but there was still very clear degradation. Most notably, the amount of memory freed in Old at every tenured-generation GC cycle was getting smaller and smaller, and cycles were getting closer together. CMS does most of its work concurrently, except for two points during which all mutator threads are paused while the GC marks live objects. Seeing these cycles happen closer and closer together meant that more and more time was lost to GC, and more requests were likely to be delayed for 3-5 seconds while GC took over. My first goal, then, was to figure out why those tenured-generation collections were happening so often.

Fortunately, the EhCache page about garbage collection contained some useful information: use -XX:+DisableExplicitGC to disable any libraries calling System.gc() and forcing full GC cycles. There is a known problem when using RMI in Java (pre-6.0, at least) where RMI causes a full garbage collection every minute by default. To remedy that, set the following parameters to larger values (both are in ms and default to 60000, or 1 minute):

  • -Dsun.rmi.dgc.client.gcInterval
  • -Dsun.rmi.dgc.server.gcInterval

These changes made a noticeable improvement, but tenured GC cycles were still happening more and more often. I started digging around some more and learned that CMS starts to collect the tenured generation whenever the Old generation gets to be around 65-68% full. Now things started to make sense. With our GC becoming less efficient over time (possibly because of a memory leak, possibly not), the occupancy rate of Old was creeping up, so it was hitting that 65% threshold, and thereby triggering GC cycles, more and more often.

Now that I had a better understanding of what was going on, I could start looking at some of our GC data (collected using -verbose:gc, -XX:+PrintGCDetails, -XX:+PrintGCTimeStamps and jstat -gccause). The first thing that struck me was that the size of our New generation was way too small for our requirements. It appears that the default values for CMS are heavily weighted toward the Old generation. I started experimenting with -XX:NewSize and -XX:MaxNewSize to explicitly allocate more heap to the New generation. I also made the total heap larger.

We started at 3GB total heap (on 8GB RAM machines running nothing but Tomcat) and things seemed to run much better at 6GB total heap with 1.5GB allocated to New. At the very least, the additional memory bought us some time before we hit the really Bad Times(tm).

I then looked at slowing down the rate at which the Old generation was filling up. My initial thought was to try to get fewer objects to tenure at each Young-generation collection. To do that I increased the size of the survivor spaces (-XX:SurvivorRatio=8 to allocate 10% of New to each survivor space) and started tracking the age distribution at which objects in New got tenured using -XX:+PrintTenuringDistribution. After some toying around with the parameters I got to see some objects live up to age 2 or 3 (initially the maximum age I could see was 1), but I still could not see much of a change in the rate of increase in Old.

We eventually made some sense of it with the argument that most of our objects are either very short-lived (one transaction length, around 100ms) and will get reaped right away, or very long-lived (either pooled or cached) and therefore should be tenured as soon as possible. Based on that reasoning and the VoIP-related case study outlined here, we settled on immediate tenuring of all objects that survive one Young-generation GC cycle. To achieve that we set -XX:MaxTenuringThreshold=0 and -XX:SurvivorRatio=1024 (the latter just to save some memory, as survivor space will no longer be used at all).

At this point things settled down a bit and I turned to fine tuning. Playing around with the size of the New space, it seemed we got a good balance of GC frequency and pause length with about 1024m dedicated to New. This kept Young-generation GCs to around one every 4-5s and tenured-generation GCs to one every 1.5 to 4 minutes, with 3-5s pauses depending on load.
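Pulling the flags discussed above together, the launch command ended up looking roughly like this. This is a sketch, not a recommendation: the heap sizes reflect our 8GB machines, and the RMI interval of one hour is an illustrative "larger value", not something the post prescribes:

```
java -Xms6g -Xmx6g \
     -XX:NewSize=1024m -XX:MaxNewSize=1024m \
     -XX:+UseConcMarkSweepGC \
     -XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=1024 \
     -XX:+DisableExplicitGC \
     -Dsun.rmi.dgc.client.gcInterval=3600000 \
     -Dsun.rmi.dgc.server.gcInterval=3600000 \
     -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
     -XX:+PrintTenuringDistribution \
     ...
```

(The trailing `...` stands in for the usual Tomcat bootstrap arguments; in practice these flags would go into CATALINA_OPTS or an equivalent startup script variable.)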

We made a couple more attempts to fight the slow increase in the size of Old.

  • -XX:+UseCMSCompactAtFullCollection to force compaction of memory at GC time, based on the assumption that we might be losing available memory to fragmentation (a known problem with CMS). After running with that setting for a while, it seemed to make absolutely no difference.
  • Reducing load on the servers for an extended period of time. We wanted to see if we were encountering some inefficiency in Java’s GC (at least with our parameters and under high load). After increasing the load back to its initial level, we saw the usage of the Old generation go almost immediately back to where it was before the “rest period” and continue to increase at the same rate.

Based on those two experiments our current assumption is that this slow creep is due to a memory leak. We’ll have to confirm that by some deeper profiling of the application.

Related Parameters and Tools

  • -XX:NewSize and -XX:MaxNewSize – Explicitly set the total size of the New Generation (Eden + 2 survivor spaces).
  • -XX:SurvivorRatio – Set the ratio between the size of Eden and Survivor.
  • -XX:MaxTenuringThreshold – Set the maximum number of times objects will move between Eden and survivor space before being tenured.
  • -XX:+DisableExplicitGC – Disable explicit calls to system.gc() which may be hidden away in some library.
  • -Dsun.rmi.dgc.client.gcInterval and -Dsun.rmi.dgc.server.gcInterval – Set the interval between full GC cycles caused by RMI.
  • -verbose:gc, -XX:+PrintGCDetails, -XX:+PrintGCTimeStamps and -XX:+PrintTenuringDistribution – Useful for tracking GC behavior in your application.
  • jstat – when used with -gccause or -gc, another great way to track GC behavior. My favorite jstat command line:
    • jstat -gccause -h10 1s


Filed under: Java