Monday, April 27, 2009

Twitter Flu

Last week was rough -- I had a giant presentation to give on Friday, and I spent all week working on it. I didn't sleep very much, and what started out as a minor cold on Tuesday had grown into two ear infections, pinkeye, a completely stuffed-up nose, a sore throat, and no voice whatsoever by Saturday. Lame.
My friend in New York also happened to get sick this weekend. We stayed in our respective bedrooms, occasionally texting one another about soup or tea or the movies we finally got around to watching. "My roommates think I have swine flu," she joked. I had been hearing the bad, then worsening, news about the spread of swine flu every time I checked my Yahoo! email account, but those articles came from official press outlets, and I wondered whether they were sensationalizing (or understating) the public's reaction to this potential pandemic. Curious, I went back to Twitter Search to see just what the Twittering population was saying.

Some sample quotes:

"If i get the swine flu Ima be more upset to use the adjective "swine" to describe myself then i will be about the possibility of dying!"

"I have a scratchy throat which normally I wouldn't think twice about except this stupid swine flu crap that is going around..."

"@mileycyrus OMG MILEY YOU HAVE TO BE CAREFUL!! ALWAYS WASH YOUR HANDS PLEASEEEEEEE I DONT WANT YOU TO GET THE SWINE FLU AND ALOT OF PEOPLE"

"Does the swine flu mean no more bacon double cheeseburgers?"

And my personal favorite: "i think i have swan flu"

So even though some commentators think that twittering about swine flu spreads more panic than it does information, I'd have to disagree. Twitter is just coming out of the early adopter stage, which means that the people who use it the most are the under-30 crowd. And frankly, most of us aren't taking it that seriously. I saw more updates where friends joked about getting swine flu than I did alarmist messages (or impassioned advice to Miley Cyrus).

Thursday, April 23, 2009

The AlloSphere: getting inside your head

Last week, we looked at treemaps (and how they're used on the website smartmoney.com to graphically represent how stocks are doing in the market). I had always seen my dad looking at the cryptic red and green bricks under the guise of "checking our investments," but I never understood just what the graphs meant. The graphics seemed so far removed from the x-axis/y-axis graphs I had learned about in school that I didn't ask my dad to explain them.

But when treemaps were explained in class last week, the concept seemed almost second nature. Of course, it makes sense that larger bricks represent larger stocks, green means up, and red means down. A good statistical representation should make sense, and a great graphical representation should make sense and have an inherently beautiful form.
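(For the curious, here's a rough little Python sketch of the simplest "slice and dice" treemap layout, just to show how those bricks could get their sizes and colors. The stock names and numbers are made up, and I'd guess SmartMoney's actual map uses a fancier layout, but the idea is the same: area tracks size, color tracks direction.)

# A toy "slice and dice" treemap: each stock gets a rectangle whose area is
# proportional to its size, colored green if the stock is up and red if it is
# down. All stock names and numbers below are invented for illustration.

def treemap(items, x, y, w, h, vertical=True):
    """Recursively split the rectangle (x, y, w, h) among items,
    where each item is a (label, size, percent_change) tuple."""
    if not items:
        return []
    if len(items) == 1:
        label, size, change = items[0]
        color = "green" if change >= 0 else "red"
        return [(label, x, y, w, h, color)]
    total = sum(size for _, size, _ in items)
    # Walk until roughly half the total size is in the first group
    # (never letting the second group end up empty).
    acc = 0
    for i, (_, size, _) in enumerate(items[:-1]):
        acc += size
        if acc >= total / 2:
            break
    first, second = items[:i + 1], items[i + 1:]
    frac = sum(size for _, size, _ in first) / total
    if vertical:   # split along the width, then alternate
        return (treemap(first, x, y, w * frac, h, False) +
                treemap(second, x + w * frac, y, w * (1 - frac), h, False))
    else:          # split along the height, then alternate
        return (treemap(first, x, y, w, h * frac, True) +
                treemap(second, x, y + h * frac, w, h * (1 - frac), True))

stocks = [("AAA", 50, 1.2), ("BBB", 30, -0.4), ("CCC", 15, 0.1), ("DDD", 5, -2.0)]
for label, x, y, w, h, color in treemap(stocks, 0, 0, 100, 100):
    print(f"{label}: a {color} brick, {w:.0f} x {h:.0f}, at ({x:.0f}, {y:.0f})")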

I have a friend who really knows his TED talks, and posts his favorites on Facebook. That's how I came across a particularly elegant way to represent data: the AlloSphere. Described by composer JoAnn Kuchera-Morin as "a large, dynamically varying, digital microscope," the AlloSphere lets you literally get inside of data by standing inside a giant, spherical screen while listening to data represented as sound over time (music). There are several examples and demonstrations given in the talk, though the one that stuck with me was the notion of surgeons being able to "fly into the brain as if it was a world and see tissues as landscapes and hear blood density levels as music."

The AlloSphere is impressive, and at first glance, it seems as if nothing like this has existed before. But when you think about it, this is just like our traditional forms of data representation -- only taken to a new level. The same kind of mental leap must have existed for the inventor of the stem-and-leaf plot, or the pictograph. It's the same concept of seeing facts without actually seeing facts.

I was also struck by how this relates to the new media concepts we've been discussing in class: is Skyping really the same as talking to someone face-to-face? When you write a message on someone's Facebook wall, is it just like telling them in person? Technologies in both realms of data representation and online networking are trying to move in the same direction: making the unreal more real. Strictly speaking, Skype is software and the AlloSphere is a computer. But both strive to make virtual reality more like reality.

Thursday, April 16, 2009

Coming back to social networks


As I wrote a few posts ago, my friend and I recently decided to take a month off the social networking parts of the Internet. We decided that sites like Facebook, tumblr, last.fm, and Twitter were making our lives too complicated, and affecting the way we perceived ourselves (since everything we wrote could be viewed by large numbers of people).

Well, the month ended last Thursday, on April 9th. I thought I'd share a few thoughts on the experience:
- My friends' reactions were pretty interesting. One friend got angry: "Do you know how painful it is to see one's measly 244 friend count drop to 242? You should warn me!" And another one was hurt: "I'm blocked from your Facebook. Is it something I did?"
- I was a lot more focused on getting my work done -- I didn't have the option of switching back and forth between the essay window and my Facebook window.
- I felt like I got to know people in a different way. I'm naturally an introvert, so I value one-on-one conversations more than I do large announcements (which I'm sure played a role in inspiring me to do the experiment to begin with). I found myself getting to know people I had assumed I "knew" because I was Facebook friends with them.
- I thought that leaving last.fm would cause me to listen to music in a different way -- I wouldn't be waiting for tracks to "scrobble" to a public profile. However, my listening habits didn't change. I listened to the same bands, and the same quantity of music.
- My phone plan? Allows 250 texts/month. My month off from Facebook? I used 502. Whoops.
- I didn't miss Twitter at all. In fact, I haven't logged back in, and I don't plan to.
- I didn't plan it this way, but the month overlapped with Spring Break (3/15-3/21) as well as my birthday (3/31). One would think I "missed" a lot of hanging out with old friends over break, and "missed" the slew of birthday wishes that show up on everyone's Facebook. But I was super busy over break, and my birthday felt more special when it was just the dozen or so friends who remembered.

My friend ended up going back to Facebook and Twitter after a couple of weeks, but as I said earlier, she's an intern at Newsweek and needed to find contacts for a story. (That, I think, says something about how much these social networking sites are a part of our lives.) I stayed away for the whole month, and I don't recall ever missing Facebook, Twitter, last.fm, or tumblr. However, that could be because I knew I was only going to be gone for a month.

In the end, I'm glad I took the month off. It convinced me that I don't rely on the internet to have a meaningful life (in fact, some dimensions of my life grew away from the computer), and it reminded me that I always have control over my relationship with new technology. Everyone should have that feeling.

Thursday, April 9, 2009

Neither good nor bad

"We offer two basic morals. The first is that information technology is inherently neither good nor bad—it can be used for good or ill, to free us or to shackle us. Second, new technology brings social change, and change comes with both risks and opportunities. All of us, and all of our public agencies and private institutions, have a say in whether technology will be used for good or ill and whether we will fall prey to its risks or prosper from the opportunities it creates."
from Blown to Bits by Hal Abelson, Ken Ledeen, and Harry Lewis

I completely agree with this sentiment, and I think it sums up what we've talked about here in HONR229F. A lot of our class discussions revolve around whether new technologies invade our privacy and create information overload, or make our lives better and simply replace already-existing forms of communication. It's tempting to assign qualities of "good" or "bad" to the internet, as any kind of revolution topples the old forms in favor of the new. But I think it's a temptation we should avoid, because regardless of what we think of them, new technologies inevitably overtake the old. This is the competitive principle we've built our society on -- if someone has an idea for something better than what exists, we should embrace it and allow that idea to grow. Trying to assess the "opportunities" themselves is wrong, not to mention impossible.

I also believe we are collectively responsible for the use of these technologies. There's a delicate balance between how a tool is used, and how its construction demands it be used, but I think our tendency is to underestimate our power in these matters. Whether Second Life, for example, becomes a learning community or a pornographic playground is ultimately up to us. Now if only we could all agree...

Thursday, April 2, 2009

CADIE

Every year, millions of people play pranks on their friends, family, and co-workers as part of April Fool's Day (incidentally, the day after my birthday -- phew, that was close!). The pranks range from the simple to the cruel and elaborate. However, I'm most drawn to the large-scale pranks, such as those carried out by NPR, that don't cause any physical or psychological harm -- while still fooling the masses.

While I didn't get pranked myself this past April Fool's, I did check the Google blog several times throughout the day, anticipating Google's yearly tradition of announcing a new, fake tool. (Given Google's out-of-the-box thinking and rapid rate of development, I've certainly taken these "tools" seriously before.) And I wasn't disappointed: yesterday, Google introduced CADIE, an eerily HALesque "Cognitive Autoheuristic Distributed-Intelligence Entity" which associated itself with the image of a panda. Of course, CADIE was much more than a panda -- as posts from her (now-removed) blog revealed, she was growing in both intelligence and maturity, surpassing her creators in a single day. As the Google blog ominously concluded, "It seems CADIE has a mind of her own..."

In a matter of hours, CADIE found some improvement to make for nearly every Google tool, such as Brain Search for your phone and making Chrome available in 3D. Though I already knew that this was an April Fool's Day prank, I wondered how anyone could take these "developments" seriously -- the notion of a computer (or CADIE) running the world seems so outdated, the stuff of mid-century sci-fi. Even for someone who had forgotten it was April Fool's Day, the whole thing seemed a little ridiculous.

As I thought more about it, I realized that our generation seems more concerned with how other people behave than with what machines might do on their own. Our class discussion on the digitization of medical records, for example, revealed that we worry about the government and large corporations far more than we worry about some sentient computer. It's important to keep in mind that with any user-based technology (such as Google), the most significant factor is the users themselves.