Toronto is the most multicultural city in the world. According to the 2011 National Household Survey, 46% of the population were foreign-born immigrants and 47% were members of a visible minority. (ref) These immigrants come from a wide variety of places across the globe and their diversity makes the city a truly remarkable place.
I have created a Dot Map that shows a single point for every person in the Toronto area, coloured by visible minority status. There are 5,700,628 dots in all, positioned by place of residence and coloured using information from the 2011 census and National Household Survey. The dots do not depict actual individual locations but are based on statistics aggregated over small areas.
This first image is zoomed in slightly and shows Toronto with only a few outlying areas. You can see regions of higher and lower population density as well as how the visible minorities are distributed across the city.
You can explore the map in detail with this Zoomable Dot Map of Toronto.
The section below is a close-up of the high-density string of condos along Yonge Street north of HWY 401. You can spot the blank rectangle of the cemetery to the left, the Don River valley, and commercial areas where no people reside.
The next image shows the white, predominantly Italian, area of Woodbridge with the South Asian concentration obvious to the west in Brampton.
It was created with population data from Statistics Canada and map reference data from OpenStreetMap. The OpenStreetMap data was taken from the very helpful Metro Extracts provided by Michal Migurski. The TileMill tool from MapBox was used to compose a map used to mask out non-residential areas and also the basemap underneath the dots. Custom code written with Processing was used to place the actual dots and create the final images. Thanks!
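The dot placement itself is conceptually simple. Here is a rough Python sketch of the idea, not the actual Processing code: it pretends each small census area is a plain rectangle, whereas the real version drops points inside the census geography after masking out non-residential land.

```python
import random

def place_dots(areas, seed=None):
    """Place one dot per person at a random point inside each area.

    `areas` is a list of (count, (xmin, ymin, xmax, ymax)) pairs: a
    population count and a bounding rectangle standing in for the real
    census-area polygon (masked to residential land in the real version).
    """
    rng = random.Random(seed)
    dots = []
    for count, (xmin, ymin, xmax, ymax) in areas:
        for _ in range(count):
            # One dot per person, uniformly random within the area.
            x = rng.uniform(xmin, xmax)
            y = rng.uniform(ymin, ymax)
            dots.append((x, y))
    return dots
```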
The calls people make into the 311 service line in Toronto give an interesting glimpse into the pulse of the city. The City of Toronto makes this data available through their Open Data initiative. I did some analysis and design work with it to produce a visualization for illuminating time-based patterns during 2012.
The visualization is a set of small multiple calendar heatmaps, one for each data series. The one shown above is for reports about 'long grass and weeds'. I was inspired to use this visual form by this example: Vehicles involved in fatal crashes by Nathan Yau. I experimented with a few different visual methods but this one did the best job of revealing both the seasonal and day of week patterns. I chose to use a unique colour scale for each series in order to maximize the amount of detail.
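Under the hood, a calendar heatmap is just a mapping from each date to a (week-column, weekday-row) cell, with the cell shade driven by that day's count. A small Python sketch of that mapping, including the per-series normalization mentioned above; the exact week-breaking convention here is my own assumption:

```python
import datetime

def calendar_cells(year, counts):
    """Map each date of `year` to (week_column, weekday_row, intensity).

    `counts` maps datetime.date -> call count. Intensity is normalized
    by this series' own maximum, giving each series a unique scale.
    """
    start = datetime.date(year, 1, 1)
    end = datetime.date(year, 12, 31)
    peak = max(counts.values(), default=1)
    cells = {}
    day = start
    while day <= end:
        # Week columns advance every Monday (weekday 0).
        week = (day.timetuple().tm_yday + start.weekday() - 1) // 7
        cells[day] = (week, day.weekday(), counts.get(day, 0) / peak)
        day += datetime.timedelta(days=1)
    return cells
```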
The image below shows the top 20 most common types of requests. Click on the image to load the full sized version. You can also view all the data series with an interactive version of the Toronto 311 Visualization.
This was created with Processing JS and contains information licensed under the Open Government Licence - Toronto.
One common pattern I see in many interactive applications is to support a person who is selecting a few items from some larger set. Often these items have various characteristics that the person wants to use in some way to guide their selection process. The characteristics can be numeric quantities, dates, categories, or names of things. Showing all the items in a list and allowing the person to sort by one of the attributes is often a decent default solution.
In other cases it's more useful to consider multiple attributes at a time during the selection process. Maybe you want items that are high in one attribute, low in another, and from a particular category. Ideally the selection process is one of exploration and successive refinement, where various filtering criteria are adjusted until a small subset of items is defined and those items can be investigated individually.
I have built an example of this concept which I call the Visual Book Selector. The books are directly represented with small circles and filters can be applied to progressively exclude books by various criteria. The filters are depicted visually as gates through which some of the items can pass and others cannot. The image below shows one possible configuration.
There are about 1000 books, which start in the top segment of the display when no filters have been applied. In this example three of the category gates have been opened so books from those categories can pass through. The ones that don't pass this filter pile up near their closed gate, which helps give some understanding of their distribution. The books that pass the first criterion encounter a second filter on the average rating of the book from Google Books reviews. This filter gate is set to only allow books having an average rating of at least 4.0 to pass through. The final gate does a pattern match on author name and allows 4 books through to the bottom, having passed all of the criteria.
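The gate mechanic boils down to an ordered sequence of predicates, where each book either passes a gate or piles up in front of it. A minimal Python sketch of that idea; the field names (`category`, `rating`, `author`) are illustrative, not the actual data schema:

```python
def run_gates(books, gates):
    """Push books through an ordered list of gate predicates.

    Returns (passed, stopped_at): the books that cleared every gate,
    and a list recording, per gate, the books that piled up there.
    """
    remaining = list(books)
    stopped_at = []
    for gate in gates:
        passed, blocked = [], []
        for book in remaining:
            (passed if gate(book) else blocked).append(book)
        stopped_at.append(blocked)
        remaining = passed
    return remaining, stopped_at

# Example mirroring the post: a category gate, then a minimum rating
# of 4.0, then an author-name pattern match.
gates = [
    lambda b: b["category"] in {"Comedy", "Crime", "War"},
    lambda b: b["rating"] >= 4.0,
    lambda b: "dickens" in b["author"].lower(),
]
```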
The best way to get a feel for it is to try out the Visual Book Selector yourself. You can use the dropdown selectors on the left of each segment barrier to choose different criteria on which to filter. Hover over a book to see details and click on its circle to visit the corresponding Google Books page.
The list of books and their categories comes from the 2009 article in the Guardian 1000 novels everyone must read: the definitive list. The other data was gathered from Google Books.
I should also note that an excellent solution to this multi-attribute selection/exploration problem posed here is the Elastic Lists concept by Moritz Stefaner. It supports what's called Facet Browsing and enhances it with the visualization of proportions and distributions as well as animated transitions.
Recently YouTube had a video that showed all six Star Wars movies at once. They were placed in a 2 by 3 matrix with an audio track of all the movies superimposed. It was an interesting experiment that has since been removed on copyright grounds. Before it was removed I was able to do some simple analysis on the video and extract some details of the individual episodes of the Star Wars series.
Basically, I produced something very similar to a classic work from 2004 called Cinema Redux™ by Brendan Dawes. Each individual movie in the series was reduced to a collection of small snapshots taken at 1 second intervals. The snapshots are laid out 60 images per row so a row corresponds to a minute in the film. These 'fingerprint' images reveal some aspects of the film structure.
Click on any of these images to see higher resolution versions.
I used some fairly simple code in Processing to analyze the video and create the output images.
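The montage layout is simple arithmetic: the snapshot taken at second t lands in column t mod 60 and row t div 60, so each row spans one minute of film. A tiny helper in Python (the original was done in Processing) to compute where each thumbnail goes:

```python
def montage_position(second, thumb_w, thumb_h, per_row=60):
    """Pixel offset of the snapshot taken at `second` in the montage.

    With 60 thumbnails per row, each row of the image covers one
    minute of film.
    """
    row, col = divmod(second, per_row)
    return col * thumb_w, row * thumb_h
```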
Last week the wonderful Guardian Datablog published an interesting post called Obesity worldwide: the map of the world's weight. It contains a map that uses colour to show the rates of obesity around the world. The accompanying chart gives data for different time frames and for both male and female, which you can select and view on the map. When I saw the chart I immediately thought of a number of interesting questions that could not be easily answered with the map or chart.
Much of my past work has been driven by personal curiosity. That, together with my background in science, has shaped my work such that most of it has been exploratory in nature. Recently I have been thinking more about the storytelling or communicative aspect of data visualization. This has been triggered by my admiration for the amazing work of the New York Times Graphics Department, and the writings of Alberto Cairo, Robert Kosara, Andy Kirk, and Jonathan Stray.
I decided to try to build an interactive visualization that helps answer the questions above. I also tried to build something that explicitly highlights some of the more interesting aspects of the data without sacrificing freeform exploration. I settled on using a Slopegraph, a form first described by Edward Tufte and featured on the cover of Cairo's excellent book The Functional Art.
This first image shows the trend for male obesity organized by continent. Showing labels for so many countries along one axis is a difficult problem, so I tried to alleviate it by letting the user expand or hide countries by continent group. In this case 'North America' is expanded to show its individual countries. Labels are only shown if they don't overlap with others. The largest countries by population are placed first.
Individual country lines can be clicked on to emphasize them with colour.
The third example shown below charts female values on the left against male values on the right in order to emphasize gender differences.
The interactive visualization includes a 'stepper' that takes the user through four different views. This helps introduce functionality gradually as well as serving to emphasize important patterns in the data.
In addition to the people and organizations mentioned above I would like to acknowledge the people behind Processing and Processing JS which was used to build the application. The code for the dashed lines comes from J David Eisenberg. Thanks!
In 2006, I started this blog as an outlet for my creative personal work as well as to gather in one place references to interesting work by other people. Since then, Neoformix has grown into a full-time business for me specializing in the development of custom data visualizations. I have just spent some time giving the website its first facelift in 7 years. I hope you like it!
I've tried to simplify the design and emphasize that Neoformix is a business by designing a main page that highlights some projects and moving the blog to a secondary page. Thanks to Twitter Bootstrap for a powerful front-end framework which I made use of in the redesign.
About five years ago I posted a simple little application called Word Hearts which lets you fill a heart shape with words. Last year it was the most visited page on my site despite the fact that it was still a Java applet, which many modern browsers won't run. I have updated this tool to use ProcessingJS so it runs well in modern browsers. There is also enhanced functionality like:
Here are a couple of examples of what you can do:
Launch the interactive version of Word Hearts to try it out.
I have built another little digital humanities project based on the text of the 62 stories in Grimm's Fairy Tales. This one is called Grimm's Story Metrics and presents an interactive matrix of stories together with various metrics calculated from their text. You can click on a column to sort by that data, click again to reverse the direction, and click on a story name to open it in another window. The image below shows the stories sorted by the 'Royalty' metric which indicates, as you would expect, how many references there are to words related to the topic of royalty. Click on the image to go to the interactive tool.
Hovering over any of the bars shows details about that particular measurement. Most of the metrics, like 'Royalty', are based on topics and the details shown are the words characteristic of that topic used in the story. So, for example, the details for 'Royalty' in the 'Frog-Prince' are princess, prince, king, kingdom which are listed in frequency order. These topical metrics are normalized based on total words in the story so longer stories have no scoring advantage.
The 'Lexical Diversity' is the ratio of the number of unique words in the story to the total words. These stories are fairly short and you can observe a rough inverse relationship between 'Story Length' and 'Lexical Diversity'. 'Clever Hans' is an outlier in this relationship. If you examine the text for this story you'll see that there is a great deal of repetition.
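Both kinds of metric are easy to reproduce. A short Python sketch, using a simple word tokenizer (the actual tokenization rules used may differ):

```python
import re

def lexical_diversity(text):
    """Ratio of unique words to total words in `text`."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def topic_score(text, topic_words):
    """Frequency of topic-related words, normalized by story length
    so longer stories have no scoring advantage."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for w in words if w in topic_words)
    return hits / len(words) if words else 0.0
```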
The area of each word reflects its frequency in the text. Connections are drawn between a word and its top three most similar words, with similarity defined by collocation within the text. The words in the outer ring have only one weak connection to another word in the graph.
My previous post on the Grimm's Fairy Tale Network showed a graph illustrating the strongest connections between the various stories. I used a few techniques to try and prevent the usual mess of connections that often obscure the relationships of interest.
Another way of tackling graphs with lots of connections is to only show a small portion of the graph at a time and use interaction to provide navigation. This lets you browse around a complex network of nodes and relations and repeatedly get views centered on a node of interest. I've created an example of this for the Grimm's fairy tale data which I call the Grimm Fairy Tale Connection Browser.
The image below shows the connections to the story 'Little Red Riding Hood'. The larger circles are stories and the smaller ones represent key words in the collection. The inner ring shows the words and stories closely connected to the story of interest. The outer ring gives the related stories and words that are related but with less strength. You can click on any story or word to make it the new focus node. Click on the image below to launch the interactive version.
This second example shows the stories and other words highly related to the word 'wolf'. The interactive tool shows the Gutenberg version of the stories in a panel on the right. When a new story is made the central focus of the visualization, the right panel shows the story text.
This was created with Processing JS.
I have had some fun playing around analyzing the text of the stories in Grimm's Fairy Tales. There are 62 stories in this set and they contain many popular tales such as Little Red Riding Hood, Snow White, and Rapunzel. The text analyzed is the English translation by Edgar Taylor and Marian Edwardes available at Project Gutenberg.
The graphic below is a simple network showing which stories are connected through the use of a common vocabulary. There are three different strengths of connection shown and I've tried to minimize the usual 'hairball' nature of these types of diagrams by only showing the top three connections for a story. Some stories have more than three links because a link is kept if it is in the top three for the story at either end. The shade of blue simply indicates the number of connections for that story - the darker the shade the more connections. Click on the image to see a larger version.
The diagram shows in the upper-right corner for example that 'Little Red Riding Hood' is strongly linked to 'The Wolf and the Seven Little Kids'. My analysis shows that the strength of this connection is due to them both using words like wolf, stones, door, belly, scissors, drowned, and devour.
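The post doesn't spell out the exact connection-strength measure, so here is one plausible reconstruction in Python: cosine similarity over each story's word counts, with the top-three rule from above (a link is kept if it is in the top three for the story at either end):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two word-count Counters."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def top_links(stories, k=3):
    """For each story, keep its k strongest connections.

    `stories` maps title -> word Counter. Returns a set of sorted
    (title_a, title_b) pairs; some stories end up with more than k
    links because the pair qualified from the other endpoint.
    """
    links = set()
    for title, counts in stories.items():
        sims = sorted(
            ((cosine(counts, other), name)
             for name, other in stories.items() if name != title),
            reverse=True)
        for _, name in sims[:k]:
            links.add(tuple(sorted((title, name))))
    return links
```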
The project 'Novel Views' consists of a series of visualizations of the novel Les Miserables by Victor Hugo. The text analyzed is the English translation by Isabel F. Hapgood available at Project Gutenberg.
This graphic shows where the names of the primary characters are mentioned within the text. Click on any of these images to see larger versions.
Characters are listed from top to bottom in their order of appearance. The horizontal space is segmented into the 5 volumes of the novel. Each volume is subdivided further with a faint line indicating the various books and, finally, small rectangles indicate the chapters within the books. In the 5 volumes there are a total of 48 books and 365 chapters. The height of the small rectangles indicates how frequently that character is mentioned in that particular chapter.
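The counts behind those rectangle heights can be reproduced with a simple whole-word search. A Python sketch (the real pipeline may handle name variants differently):

```python
import re

def mention_counts(chapters, names):
    """Count how often each character name appears in each chapter.

    `chapters` is a list of chapter texts; `names` a list of character
    names. Returns name -> list of per-chapter counts, which drive the
    heights of the small rectangles.
    """
    result = {}
    for name in names:
        pattern = re.compile(r"\b%s\b" % re.escape(name), re.IGNORECASE)
        result[name] = [len(pattern.findall(ch)) for ch in chapters]
    return result
```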
A word used in multiple places in a text can be interpreted as a connection between those locations. Depending on the word itself the connection could be in terms of character, setting, activity, mood, or other aspects of the text. This graphic shows a number of these word connections.
The 365 chapters of the text are shown with small segments on the inner ring of the circle with the first chapter appearing at the top and proceeding clockwise from there. The outer ring shows how the chapters are grouped into books of the novel and the book titles are shown as well. The words in the middle are connected using lines of the same color to the chapters where they are used. The edge bundling technique together with the Volume - Book - Chapter hierarchy of the text are used so the patterns of connections are more easily revealed.
I really like the effect and it's completely automatic which opens up some interesting possibilities. The original base image is by Steve McCurry and is of Sharbat Gula. A retrospective on her life done by National Geographic can be found here.
In my last post about visualizing Movement in Manhattan I mentioned that it would be interesting to explore a more direct view of the data by using an animation. I have created such a video based on a fresh collection of tweets from Monday, April 30th. I gathered new data because I realized that my previous data set was collected over the weekend and I suspected that a weekday might provide more obvious patterns. It compresses 24 hours of data into 1 minute of video. Here it is:
I was influenced by the 'Fireflies' video showing iPhone traces done by Michael Kreil. In particular, I like the idea of using larger but more transparent graphics to represent the increased uncertainty when drawing interpolated locations. Basically, if a person tweets at location A and then again at location B ten minutes later the model I used assumes they moved at a constant speed in a straight line between those two events. This is an obviously crude approximation and leads to unrealistic paths in many cases. By increasing the transparency in between the two measured events it shows this uncertainty in a visual manner.
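Here's a small Python sketch of that interpolation model. The straight-line, constant-speed assumption is the one described above; the exact alpha falloff is my own illustrative choice (lowest at the midpoint, where uncertainty is greatest):

```python
def interpolate_with_uncertainty(p_a, t_a, p_b, t_b, t):
    """Position and visual alpha at time `t` between two tweets.

    Assumes constant-speed straight-line motion from p_a (time t_a)
    to p_b (time t_b). Alpha is 1.0 at the measured endpoints and
    lowest at the midpoint, where positional uncertainty is highest.
    """
    f = (t - t_a) / (t_b - t_a)
    x = p_a[0] + f * (p_b[0] - p_a[0])
    y = p_a[1] + f * (p_b[1] - p_a[1])
    alpha = 1.0 - 2.0 * f * (1.0 - f)  # 1.0 at ends, 0.5 at midpoint
    return (x, y), alpha
```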
Again, as I saw in the original version, the patterns of tweets, both moving and static, are quite chaotic. You can easily see the rise and fall of tweets over the changing time of day and some local patterns that look interesting, but overall the patterns are still a bit of a jumble.
The geolocated tweets were collected with the library Twitter4J which was used from code written in Processing. I used this tutorial created by Jer Thorp to get started with the library. Code from this flow field sample by Daniel Shiffman was used as a starting point to create my flow maps. The background map is from OpenStreetMap. Thanks everyone!
Inspired by the beautiful and elegant Interactive Wind Map created by Fernanda Viegas and Martin Wattenberg I have begun to explore the flow of people within a city. An ideal dataset to do this would include the GPS traces from thousands of people wearing trackers for weeks as they go about their daily lives. Organizations such as crowdflow.net and OpenPaths collect voluntarily donated data of this type and might be fruitful to explore. I decided, instead, to use geolocated tweets to try and see how the movement of people is affected by the urban landscape.
The image below shows an area of Manhattan roughly from Houston Street north to 72nd Street which corresponded to the region with the most geolocated tweets that I collected. It includes Times Square, Grand Central Station, the Empire State Building, Rockefeller Center, the southern portion of Central Park, and many other well known landmarks. The blue and red markings are an attempt to show the flow of people based on the data.
Basically, tweets sent by the same person within a 4-hour time window were used as samples of speed and direction. These samples were used to construct a vector field representing the average flow of people within the area. The vector field and total tweet density over the space were then used to simulate the movement of people. Particles, representing people, were released at locations where actual tweets were recorded and their subsequent movement was determined by the flow field. The particles start out blue and gradually change through purple to red over time so each trace shows the direction of movement. Locations where there is little movement will have blue dots or very short blue traces. Longer traces with more red show a greater speed at that point.
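A minimal Python sketch of the vector-field construction step, assuming path segments have already been extracted from consecutive tweets by the same person (the grid resolution and the start-point binning rule are simplifications):

```python
def build_flow_field(segments, grid_w, grid_h, cell):
    """Average flow vectors on a grid from observed path segments.

    `segments` is a list of ((x0, y0), (x1, y1), dt) triples: two
    consecutive tweet locations from the same person and the time
    between them. Each segment deposits its velocity into the grid
    cell of its start point; each cell averages its deposits.
    """
    sums = {}
    for (x0, y0), (x1, y1), dt in segments:
        cx, cy = int(x0 // cell), int(y0 // cell)
        if 0 <= cx < grid_w and 0 <= cy < grid_h:
            vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
            sx, sy, n = sums.get((cx, cy), (0.0, 0.0, 0))
            sums[(cx, cy)] = (sx + vx, sy + vy, n + 1)
    return {key: (sx / n, sy / n) for key, (sx, sy, n) in sums.items()}
```

A particle released into this field would look up its cell each step and move along the averaged vector there.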
The density and direction of the flow patterns seem reasonable but they do appear fairly chaotic - much more so than the patterns seen in wind flow, for example. This makes sense for several reasons. First, people are much less deterministic than the molecules that make up the air. Second, the environment they move through is extremely complex. Third, statistically we are dealing with a much smaller sample size - in this case, roughly 34,000 geolocated tweets with only 9,600 path segments. If we had a million times more data then the average patterns would be clearer. Another important factor is that this data was collected over a few days, so there may be clear patterns for specific times of day that are mixed together visually.
I have produced three more images that separate out the data by time of day. This first one only uses data from 6-11 am. It does appear to be a bit simpler and shows a few interesting patterns but it is still fairly chaotic. There is a strong flow east out from Central Park near 65th Street. There is also a more scattered flow from the east into New York University near the bottom left.
The afternoon flow map shows a greater overall density indicating a greater number of locations from which people are tweeting. There also appears to be a strong convergence on the area of 14th Street - 4th Avenue.
The evening map is also quite busy with lots of small local patterns. There is heavy action between 50th and 57th Streets. Comparing these three versions is easier with this Flickr lightbox version of the images.
Overall, there are lots of flows and some of them likely reflect real movement of people within Manhattan. Many others probably just reflect noisy data because the sample size is so small. It's difficult to distinguish between the two cases here. The technique itself might warrant further study with more data. Another interesting avenue to explore would be to more directly visualize the data with an animation like this 'Fireflies' video showing iPhone traces done by Michael Kreil.
The geolocated tweets were collected with the library Twitter4J which was used from code written in Processing. I used this tutorial created by Jer Thorp to get started with the library. Code from this flow field sample by Daniel Shiffman was used as a starting point to create my flow maps. The background map is from OpenStreetMap. Thanks everyone!
This is Part 4 of a set of posts related to the analysis of the Data Visualization Field on Twitter. For context or more information you may want to read those other posts first. They are:
In the previous posts we have seen that there are two fairly cohesive subgroups of twitter accounts that emerged from our analysis of the original 1000 accounts. I've been calling them the 'blue' and the 'red'. They were determined by looking exclusively at the references to twitter IDs within the tweets that were sent.
Presumably the fact that there are two fairly distinct groups would also be reflected in what they are discussing. I've done some analysis of the words used within the tweets for both groups. English stop words ('the' , 'and' , 'or', ... ) and other words commonly found in tweets ('new', 'via', 'like', 'day', ...) were excluded. Word clouds definitely have their limitations but I believe they can be an effective way to get a qualitative feel for a body of text. I have used Wordle to construct word clouds for the two groups.
It's clear that the blue group tweets a lot about 'art', 'code', 'design', 'processing', 'project', 'app' and 'workshop'. The red group tweets about 'data', 'visualization', 'design', 'infographic', and 'visual'. There is some overlap for sure but it's clear that they emphasize different things in what they are talking about.
Right from the very start I was calling the whole set of accounts the 'Data Visualization Field'. Of course, a more accurate description was that I was looking at the 'Set of Accounts on Twitter Connected Through Tweet Mentions from @moritz_stefaner, @datavis, @infosthetics, @wiederkehr, @FILWD, @janwillemtulp, @visualisingdata, @jcukier, @mccandelish, @flowingdata, @mslima, @blprnt, @pitchinteractiv, @bestiario140, @eagereyes, @feltron, @stamen, and @thewhyaxis'. It doesn't exactly roll off the tongue. From looking at these word clouds it appears that the red group could reasonably be named 'The Data Visualization Field' and the blue group something like 'Computational Artists and Designers'.
If we want to contrast these two groups more directly we can look for words that are used much more frequently in tweets of one group than the other. I've done this for words that met both an overall frequency threshold and an author support threshold - they were used by at least 10% of the group members. The bar charts show the frequency proportion. So, for example, in the large sample of tweets I looked at from both of the two groups if you count the number of times the word 'makerbot' was used then 99% of those instances were in tweets from people in the blue group.
This shows even more clearly the different things that these two groups emphasize.
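The frequency-proportion calculation with the two thresholds can be sketched as follows. This is an illustrative Python reconstruction; the input format (author -> word list, per group) is an assumption:

```python
def distinctive_words(blue_tweets, red_tweets, min_total=5, min_support=0.1):
    """Frequency proportion of each word between two groups of authors.

    Each argument maps author -> list of lowercase words they tweeted.
    A word qualifies if it appears at least `min_total` times overall
    and is used by at least `min_support` of the members of one group.
    Returns word -> fraction of its uses that came from the blue group.
    """
    def tally(group):
        counts, authors = {}, {}
        for author, words in group.items():
            for w in set(words):          # author support
                authors[w] = authors.get(w, 0) + 1
            for w in words:               # raw frequency
                counts[w] = counts.get(w, 0) + 1
        return counts, authors

    b_counts, b_auth = tally(blue_tweets)
    r_counts, r_auth = tally(red_tweets)
    result = {}
    for w in set(b_counts) | set(r_counts):
        total = b_counts.get(w, 0) + r_counts.get(w, 0)
        support = max(b_auth.get(w, 0) / len(blue_tweets),
                      r_auth.get(w, 0) / len(red_tweets))
        if total >= min_total and support >= min_support:
            result[w] = b_counts.get(w, 0) / total
    return result
```

A word like 'makerbot' scoring 0.99 means 99% of its uses came from blue-group tweets.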
The recent post on Data Visualization Field Subgroups had an interesting reaction on Twitter that I didn't expect. Many people that were placed in the 'red group' by the community detection algorithm in Gephi joked about being part of the 'team' and being happy to represent it and be grouped together with the others. Jen Lowe lightheartedly suggested a scrimmage at #eyeo between the red and blue. There was much less reaction from the 'blue group', likely because I'm embedded within the reds myself, so they paid more attention to my posts and the subsequent reaction on Twitter.
There does, indeed, seem to be two fairly cohesive groups of people here but I suspect there are very many connections between the groups as well. We can use some simple network analysis to get a feel for this. Here are a few statistics calculated on the blue and red groups only:
| | Blue | Red |
| --- | --- | --- |
| Number of Nodes | 216 | 244 |
| Total Intergroup links | 665 | 1329 |
| Total Intragroup links | 5405 | 5047 |
| Percent Intergroup links | 10.96% | 20.84% |
Both groups are pretty similar in most respects. The primary difference is that blue group members have on average more incoming links and that the percentage of intergroup links going from someone in one group to someone in the other is roughly double for reds. Remember that a link from A to B means that A referenced B in a tweet through a reply, a retweet, or just mentioning them in some context. When considering just the links between these two groups the people in red are referring to the people in blue at twice the rate of the reverse.
If you look at the graph showing both groups together (edges not drawn) it's clear that some nodes, for example blprnt and pitchinteractiv, are on the border between the groups which suggests they likely have a fair number of cross-group connections.
By looking at the details of the connections and their strengths we can quantify the 'blueness' or 'redness' of any particular node. This indicates how embedded they are within their own group. We can also do this separately for both incoming and outgoing links but I'll keep it simple for now and show one value that reflects both types of links together. This first table shows the top blue accounts (by degree) sorted by how 'blue' they really are.
| Blue Account | Degree | Blueness % |
You can see that feltron, blprnt, eyeofestival, and ben_fry are all tending towards the red which matches what we see in the network graphic where they are on the border. This table below shows how 'blue' the top twitter IDs are that were placed in the red group. Again we see that some accounts had significant linkages to the blue group.
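For the curious, the 'blueness' score can be computed from the link weights roughly like this. It's a Python illustration that treats incoming and outgoing links together, as in the tables; the dictionary input format is an assumption:

```python
def blueness(node, links, blue, red):
    """Fraction of a node's within-map link weight that connects it
    to the blue group, counting incoming and outgoing links together.

    `links` maps (source, target) -> connection weight.
    """
    blue_w = other_w = 0.0
    for (src, dst), w in links.items():
        if node not in (src, dst):
            continue
        peer = dst if src == node else src
        if peer in blue:
            blue_w += w
        elif peer in red:
            other_w += w
    total = blue_w + other_w
    return blue_w / total if total else 0.0
```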
There was some interesting discussion yesterday on Twitter about my post on the Data Visualization Field on Twitter. Moritz Stefaner pointed out that he didn't see a big improvement over his VIZoSPHERE and that the topology was quite similar. Furthermore, he noted that if you rotate my version 90 degrees counter-clockwise many of the primary nodes line up fairly closely with his. He's right, and it's something I missed completely. It's not really surprising that an analysis of mostly the same twitter accounts using a different connectedness metric would yield similar results. I do still feel the map based on tweet text account references is slightly better at the detailed local level but I have no objective evidence that this is the case.
Another interesting thing I learned yesterday was that Lynn Cherny did an excellent analysis of Moritz's data back in September which is reported in Combing Through the Infovis Twitter Network Hairball. She focused on the detection of sub-communities within the network using both Gephi and NetworkX and has some nice results.
Following Lynn's lead I have spent some time looking at the communities within my data. Doing this analysis with Gephi yields subgroups that look like this:
The modularity score was .356 which is slightly under the .4 boundary for significance. By visual inspection of the image above it seems clear that there are two coherent groups to the left and four other groups that are intermixed and less clearly defined. These two coherent groups correspond pretty well to what I saw by eye yesterday. The top-left blue group has people who focus on computational design, generative art, or design in general. The bottom-left red group, as I noted yesterday, seem focused more on the practical aspects of data visualization.
Below is a map showing only the blue group. I've also shown the top 3% of edges as well. I wasn't able to emphasize the flows as much as I would have liked but you can see some of the stronger edges and their direction. One of the strongest relationships visible in this map goes from @eyeofestival to @blprnt which indicates that a relatively high fraction of the tweets sent by @eyeofestival mention @blprnt.
Here is the map for the red group below. Note that you can click on any of these images to get PDF versions where you can zoom in or search for a particular account.
I consider myself one small part of a community on Twitter that focuses on information visualization, computational design, and interaction design. Collectively we tweet about our personal work, highlight other work of quality or that has interesting characteristics, critique approaches or individual designs, discuss tools and techniques, and suggest interesting datasets or projects. I'm grateful to be connected with such an interesting group of people and I've learned a great deal from them.
Moritz Stefaner is an important part of this group and in July 2011 he created an interesting map of this community he calls The VIZoSPHERE. Basically, he started from a set of 18 selected twitter accounts, found their friends and followers and included any twitter account that met a minimum criterion of connectedness. A small version of part of this map is below. Node sizes reflect the number of followers within this community.
It's a fairly standard graph view of the network data and the sheer number of connections makes them extremely difficult to traverse. Like many such large network graphs the primary utility seems to come from seeing which nodes are largest and seeing which ones seem to be grouped together, presumably reflecting nodes that have a similar set of connections to the rest of the network or strong connections between them. This can sometimes visually suggest sub-groups within the overall community.
After stumbling across this work recently I decided to explore the same problem myself. Rather than rely on follower information for connectedness I decided to analyze the actual tweets sent and look for mentions of twitter IDs. These could be retweets, replies, or just references to someone in a tweet. For a given twitter account we are essentially looking at who they talk to or talk about. Unlike the binary nature of the follower connections we can also measure the strength of this connection by looking at how often one person mentions another.
I started with the same set of accounts that Moritz used: @moritz_stefaner, @datavis, @infosthetics, @wiederkehr, @FILWD, @janwillemtulp, @visualisingdata, @jcukier, @mccandelish, @flowingdata, @mslima, @blprnt, @pitchinteractiv, @bestiario140, @eagereyes, @feltron, @stamen, and @thewhyaxis. I looked at the 1000 latest tweets from each (or as many as they had, if fewer than 1000) and found all the twitter accounts they mention. For each mentioned account I calculated its support: the number of accounts among the original 18 that mentioned it. I used that ranked list to enlarge my set to 50. The latest 1000 tweets for this larger set were retrieved and analyzed in the same way to enlarge the community to 100. I repeated the process once more, using tweets from these 100 accounts to arrive at a final list of the top 1000.
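The enlargement process described above can be sketched in Python. This is a minimal reconstruction, not the code actually used: it assumes the tweets have already been fetched into a dict mapping each account name to a list of tweet strings, and `mentions`, `support_counts`, and `enlarge` are illustrative names of my own.

```python
import re
from collections import Counter

MENTION_RE = re.compile(r"@(\w+)")

def mentions(tweet_text):
    """Return the lowercased handles mentioned in one tweet."""
    return [h.lower() for h in MENTION_RE.findall(tweet_text)]

def support_counts(tweets_by_account):
    """For each mentioned handle, count how many *distinct* accounts
    in the current set mentioned it at least once (its 'support')."""
    support = Counter()
    for account, tweets in tweets_by_account.items():
        seen = set()
        for t in tweets:
            seen.update(mentions(t))
        seen.discard(account.lower())  # ignore self-mentions
        support.update(seen)
    return support

def enlarge(current, tweets_by_account, target_size):
    """Grow the account set to target_size by adding the
    best-supported handles not already in the set."""
    ranked = support_counts(tweets_by_account).most_common()
    grown = list(current)
    for handle, _ in ranked:
        if len(grown) >= target_size:
            break
        if handle not in grown:
            grown.append(handle)
    return grown
```

Running `enlarge` repeatedly with target sizes of 50, 100, and 1000, re-fetching tweets for the grown set between rounds, mirrors the snowball procedure in the text.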
The total number of tweets analyzed for these 1000 accounts was 821,407 and I used them to determine a directed connection strength between each pair of accounts. This connection data was loaded into Gephi which I used to produce the graph below.
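Turning raw mention counts into a directed, weighted edge list that Gephi can import might look like the sketch below. The function names and the plain Source/Target/Weight CSV layout are my assumptions; Gephi does accept edge lists in that CSV shape.

```python
import csv
import io
import re
from collections import Counter

MENTION_RE = re.compile(r"@(\w+)")

def edge_weights(tweets_by_account, community):
    """Count directed mentions: (source, target) -> number of tweets by
    source that mention target, restricted to community members."""
    members = {m.lower() for m in community}
    weights = Counter()
    for account, tweets in tweets_by_account.items():
        src = account.lower()
        if src not in members:
            continue
        for t in tweets:
            for tgt in set(MENTION_RE.findall(t.lower())):
                if tgt in members and tgt != src:
                    weights[(src, tgt)] += 1
    return weights

def to_gephi_csv(weights):
    """Serialize the weighted edges as a Source,Target,Weight CSV."""
    buf = io.StringIO()
    w = csv.writer(buf)
    w.writerow(["Source", "Target", "Weight"])
    for (src, tgt), n in sorted(weights.items()):
        w.writerow([src, tgt, n])
    return buf.getvalue()
```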
For a searchable and zoomable version use the PDF.
As in Moritz's VIZoSPHERE, there were so many connections that drawing them conveyed no useful information the eye could pick out, so I left them out. The connections are still used to lay out the nodes, and node sizes are determined by degree: the number of edges coming into or out of each node. The bigger nodes can be read off from this graph: @blprnt, @moritz_stefaner, @flowingdata, @visualisingdata, @janwillemtulp, @infosthetics, @golan, @mariuswatz, @reas, @ben_fry, @brainpicker, @nytimes, @timoreilly. Many of these larger nodes are, unsurprisingly, among the original seed accounts we started with.
Looking at the details of which accounts are placed near each other seems to give reasonable results. @Eyeofestival is near @blprnt, @krees near @periscopic, and @mccandelish near @infobeautiful. Many nodes, though, are likely placed based on more global or indirect factors, so some surprising juxtapositions remain.
Many of the initial seed accounts are in the lower left part of the diagram and seem to reflect a subgroup focused more on the practical aspects of data visualization. The top left accounts seem to be more in the area of computational design, generative art, or design in general. @Blprnt seems to lie between these two subgroups. The right side of the diagram seems to be more general media and data sources. I suspect that many of the accounts on the left side mention those on the right, but not the reverse. In fact, I suspect that many of the accounts on the right side aren't really part of the community in that they don't strongly interact with it. They are sources but not contributors. It would be interesting to repeat my enlargement process from the original seed accounts with some minimum criterion for two-way interaction.
The nodes are colored based on the total number of incoming links, which represent people in this community mentioning that account. The darker the color, the more incoming links there are. So there are a lot of different people within this community referring to @blprnt, @flowingdata, @brainpicker, and @nytimes, for example. You can't extract much quantitative detail from a color range, but it does give you a feel for which accounts are highly referenced. Note that the color is based on the absolute number of incoming links, not the proportion of incoming to total links. That would be a more interesting measure, but I couldn't easily map it to color with Gephi.
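The proportion measure could be computed outside Gephi and imported as a node attribute. A minimal sketch, assuming the edges are available as a list of (source, target) pairs; `degree_stats` is an illustrative name:

```python
from collections import defaultdict

def degree_stats(edges):
    """Given directed (source, target) edges, return for each node its
    in-degree and the proportion of incoming links: in / (in + out)."""
    indeg = defaultdict(int)
    outdeg = defaultdict(int)
    for src, tgt in edges:
        outdeg[src] += 1
        indeg[tgt] += 1
    stats = {}
    for n in set(indeg) | set(outdeg):
        total = indeg[n] + outdeg[n]
        stats[n] = (indeg[n], indeg[n] / total if total else 0.0)
    return stats
```

A node with a high proportion is one that gets mentioned far more than it mentions others, which matches the "sources but not contributors" pattern suspected for the right side of the diagram.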
This looks like an interesting view of the data and I'm curious to explore a few related variations. Note that prominence within this graphic is a fairly crude measure of overall contribution to the field of data visualization. Some key figures in the field don't use twitter and so aren't represented here at all: Stephen Few, for example, whose critiques have a huge impact and are discussed within the twittersphere. Others, such as Ben Shneiderman (@benbendc) and Edward Tufte (@edwardtufte), do use twitter but not extensively, and not to a level that reflects their value to the field. They appear in this map but have very small bubbles.