This project would not have been possible without the hard work of those who created all the publicly available open datasets online. Thank you for sharing your data with us all!
The daily visualization project originally started as a way for me (as a user) to better understand the capabilities of the data visualization design platform, Datavisual (http://datavisu.al), that I have been developing for the last four years. I researched, designed and published a data visualization every day for all of 2017 on topics that were relevant for each particular day. This personal project quickly turned into a platform for data and design reportage, a place to counter ‘alternative facts’ with actual facts gathered from established and reputable sources.
What makes this project innovative?
Researching, analyzing, designing and publishing a chart every day based on current events was quite challenging but very rewarding. This project allowed me to better understand the facts behind the headlines, share those facts with the world, and push back daily against those keen to ignore them in this post-truth world.
What was the impact of your project? How did you measure it?
All of my visualizations were published on multiple platforms (Facebook, Tumblr, Instagram, LinkedIn and Datavisual). This allowed me to see which visualizations resonated with each platform's audience. At the end of 2017 I analyzed which visualizations were most popular on each platform and was fascinated to find that there was very little overlap.
Source and methodology
Since each visualization was based on a different topic, I practiced many different methods for researching, collecting and analyzing data. For instance, when visualizing Trump's Twitter habits I scraped data directly from the Twitter API; for the many natural disasters we experienced I found data on government and scientific websites; and crime data was taken from the FBI database as well as local law enforcement databases. To promote transparency and honesty, I always included a source with a direct link to the dataset.
The majority of the data was taken from open datasets on government websites, corporate annual reports, and data collected by other media outlets. For web scraping I used Python, for Twitter data collection I used Python and Tweepy, for PDF conversion I used Tabula, and for geocoding data I used Python and the Google Maps API. The analysis was done mainly with Excel. All the designs were created using Datavisual.
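To give a flavor of the kind of analysis described above, here is a minimal Python sketch of aggregating tweet activity by hour of day, the sort of habit analysis the Twitter visualizations relied on. The timestamps below are hypothetical sample data standing in for what Tweepy would return from the Twitter API; the actual collection and analysis steps in the project differed.

```python
from collections import Counter
from datetime import datetime

# Hypothetical sample of tweet timestamps (in practice these would be
# pulled from the Twitter API via Tweepy, not hard-coded).
timestamps = [
    "2017-03-04 06:12:00",
    "2017-03-04 06:45:00",
    "2017-03-04 21:30:00",
    "2017-03-05 06:05:00",
]

# Count tweets per hour of day -- a simple way to surface posting habits.
hours = Counter(
    datetime.strptime(t, "%Y-%m-%d %H:%M:%S").hour for t in timestamps
)

# The most active hour and its tweet count.
print(hours.most_common(1))  # [(6, 3)]
```

A tally like this feeds directly into a bar chart of tweets per hour, which is the form several of the Twitter-habit visualizations took.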