Participating in the Billion Graves Project

Billion Graves is a project that crowdsources the photography of gravestones and grave sites around the world.

Today, I became a participant in that project when I pulled into a Catholic cemetery off of Route 7 and stepped out from the comparative warmth of my Honda CR-V into the bitter cold of early January. Public schools had been closed because the day was deemed “too cold”: buses would not start in the morning, there was ice on the ground, and so on. Just to give you an idea of how chilly it was.

I stepped out of the car in my snow boots, iPhone in hand, with the Billion Graves app newly installed. Billion Graves provides an easy-to-use iPhone app: you download it, pick or add a cemetery, and go about photographing headstones there. Once you’ve taken clear photographs, you upload them through the app and they appear under the entry for whichever cemetery you were recording. Some cemeteries in Northern Virginia, quite a few actually, have hundreds of photographs. The one I was photographing had a handful.

I trudged along the main sidewalk that cut through the entire small cemetery and took pictures of about ten graves on each side before stomping through the snow to some of the more off-the-beaten-path graves. Whether there were sidewalks leading to these other areas I can’t say, because the ground was covered in snow. I ended up taking a total of forty grave photographs, focusing on stones with clearly spelled-out names and family members. Many of the graves were of husbands and wives together. Many were of family members with the same last name who appear to have died decades apart. I made sure to crouch down, snow boots and all, to get the entire grave transcription, name and dates, in the image. My images ended up being fairly crisp and easy to read.

After taking my photographs, I headed back to my car to upload the photos through the app. They were successfully uploaded, and appeared under my Billion Graves profile as contributions.

I did hit a little snafu in my contribution to Billion Graves. The cemetery I was walking through did not seem to be among those in Billion Graves’ automatic listing of cemeteries, so I added a new cemetery in the app and named it “St. Athanasius Church”, since that was the name of the church overlooking the cemetery. I added my device’s location as the address and went about my business photographing headstones. After walking for a while and adding pictures, the app began to automatically organize my pictures under “St. Andrews Cemetery”, which apparently is the cemetery’s actual name. That name wasn’t listed on any signs, so I had just gone with the name of the church, and now Billion Graves has two entries that are side-by-side. Not that it’s a big deal; the app makes it pretty clear that the church and cemetery are right next to each other, so I don’t think there should be any confusion. Editing the name of the St. Athanasius entry didn’t work out either, as this seems to be handled on Billion Graves’ side and not accessible to app users once they’ve submitted a new cemetery.

EDIT: As I check on the status of my uploaded photographs, they don’t appear in my Billion Graves website account; they only appear as uploaded in my Billion Graves iPhone app. Whether this is a syncing issue between their mobile app and web app or something else, I don’t know, but that seems the most plausible explanation for why the photos appear as synced in my app but not on the website. I would like to have access to my uploaded photographs on the website, but it seems I’ll have to keep checking back until they appear.

Aside from that small blip, I’m glad to have made a contribution to the project. I was surprised that no one I talked to about the project seemed to find it odd that I would be going around taking snapshots of the gravestones of people I have no relationship with. I didn’t think it was strange either, so I’m glad that this seems to be the default reaction.

People like to connect with their roots. Not necessarily all people, but many people. The popularity of web applications like Ancestry.com gives some idea of how interesting many people find examining their genealogical and historical roots to be.

I was more interested in the larger human story beyond any one family. The fact that we periodically lose such incredible people forever seems unnecessary; there really must be a better way. I’m heartened that many efforts at advancing anti-aging and geriatric medicine have started to enter the commercial mainstream. Companies like Google’s subsidiary Calico and a whole slew of smaller gerontology biotech companies seem to feel that aging, degeneration and death don’t need to be something we as a society simply accept.

As I was trudging back to my car in the freezing cold, I passed a gravestone that had no name listed on it, just a paragraph-long poem. It was extremely cold, and with no name on the headstone, I quickly read the poem and kept on walking without snapping a photo. The poem was essentially a paragraph-long description of the departed’s final years on Earth: how the family watched as their loved one became ill, how they struggled to hold on and to help their loved one heal, and how the illness led to frailty and, finally, death. The family fought that illness till the end, without success.

And the closing line…

“He only takes the best!”

The capitalized “He” being a religious reference.

As our society continues to march forward and navigate the wilderness of the human body with the map of science and technology, we can only hope that families such as that one will have better equipment with which to fight the illnesses of their loved ones. Maybe they’ll win next time and keep a beloved member of their family around for a while longer. If some radicals in the field of geriatric science and high technology have their way, maybe they’ll keep them around for a long, long while longer.

Ray Kurzweil, a recipient of the National Medal of Technology and a director of engineering at Google, is even making efforts to preserve artifacts of his late father’s life, believing that at some point in his life (Kurzweil is 63 years old) technology will have become sophisticated and powerful enough to allow him to reconstruct his father from these artifacts. From an article on ABC News online:

Kurzweil’s father, an orchestra conductor, has been gone for more than 40 years.

However, the 63-year-old inventor has been gathering boxes of letters, documents and photos in his Newton, Mass., home with the hopes of one day being able to create an avatar, or a virtual computer replica, of his late father. The avatar will be programmed to know everything about Kurzweil’s father’s past, and will think like his father used to, if all goes according to plan.

That’s an interesting quote for historians, especially digital historians: you’ve got someone with a well-established track record in technology openly saying that if you have these historical records, these documents and correspondences and photographs, then you have at least a somewhat reliable image of who a person was. It may be that, as technology continues to progress, the material historians assemble will become the templates for the construction of interactive avatars and places. We’re already doing something like this with our Online Exhibit project in this class. As time goes on, these exhibits may simply become more immersive.

People will go to great lengths to prevent the death of the people they love. It’s a loss that I came face-to-face with as I walked through that cemetery and recorded the final resting places of so many people. One thing I did notice as I was photographing this cemetery, which had graves dating to the time of the Civil War and possibly earlier, was that the average life span seemed to be increasing. That’s not a scientific analysis, obviously, but it was typical to see people born in the 1800s dying at 60 or 64 or thereabouts, while people born in 1930 and later regularly had 80- to 90-year life spans. Let’s keep that going.


The Theology of Leibniz, and the Questionable Value of Some Works of Philosophy

I’m on Leibniz-translations.com, reading a piece by Leibniz entitled On the Incarnation of God, Or, On the Hypostatic Union, and I’m struck by what can only be described as the ill-informed uselessness of the document.

This is obviously not a complaint against Leibniz or his method of reasoning; it’s more a lament that thinkers of his time had to make do with the understanding of the world they had at that time.

One of the more cogent thoughts in the piece is:

It should be noted that even though mind does not act continually on body, it still thinks.

How do we reconcile thoughts like this with modern molecular biology and neuroscience, where the mind is understood, at least on a biological level, to have a physical, chemical basis in biological molecules like proteins and their supporting organic chemistry, and where “body” presumably means anything besides the brain?

Nowadays we have almost entirely lost the distinction between body and mind. The brain is an organ made of the same basic constituent, the cell, as the rest of the body. I’m aware that there is now a “neurophilosophy” dealing with questions of body and mind that makes great efforts to incorporate, and indeed look to, modern neuroscience in its philosophical reasoning.

Do we take these new scientific results and try to “update” the work of the philosophers of times past, the representatives of our intellectual history and legacy? Maybe there is truth in there. Maybe there are useful ideas that modern scientists, historians and philosophers can use to understand and inquire about more timely matters.

In the human body, for example, it should not be thought that the soul is hypostatically united to all the corpuscles in it since they are constantly in passage; instead, the soul inheres in the very centre of the brain, to a certain fixed and inseparable flower of substance which is most subtly mobile in the centre of the animal spirits, and it is substantially united so that it is not separated even by death.

Interesting, and I do acknowledge and appreciate that over three hundred years ago, some leading thinkers had already begun to recognize the total centrality of the brain in human thought. Looks good. Until:

Therefore the bodies of the possessed are not united to the demon, because their bodies are not inseparably united to it; instead it acts on them only by means of its own body.

Hm. Important contributions by Leibniz here to the field of demonology.

We might excuse him here. Some might even say, “Oh, you see, what Leibniz means is that ‘demons’, meaning the sins of addiction and sloth and the like, are not permanently attached or ‘united’ to a person; they’re just acting on pure souls from a distance, and they can be removed totally and forever, and rendered separate.” That’s a nice thought! I would support that kind of thinking. But I think the guy is just talking about demons. Which is fine. This was a while ago. Newton, from what I understand, had an interest in the occult. All of these things fell under the umbrella of “the unknown” at that time, and it’s easy to see how they would fascinate curious people.

That being said, I do understand that it was a different time, and that there was not as much scientifically sound information available on which to base philosophical arguments. You take the bad with the good, I guess, but it does make me wonder whether it is really worthwhile to hold all of the works of some of these leading lights in such high esteem.

It’s fine actually. It’s a part of our intellectual history.

Gottfried Wilhelm Leibniz

Time to research Leibniz! I’ll be translating a small part of one of Leibniz’s works from the original Latin, German or French into English, and comparing my results to one of the more established translations, found at Leibniz-translations.com.

Since Leibniz’s written works make up such a large collection, and most of them are not available in English, I’ll need to content myself with a small subset of his total output.

Some of the concepts explored by Leibniz that I think would be a good fit for my project are as follows:

-Binary arithmetic.

-Symbolic logic and his concept of a universal language of logic.
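Binary arithmetic is the one item on this list that maps directly onto something runnable today. A minimal sketch in JavaScript (the function names are my own, purely for illustration; Leibniz's own notation, of course, looked nothing like this):

```javascript
// Leibniz's binary arithmetic, sketched in modern JavaScript.

// Convert a non-negative integer to its base-2 (binary) representation.
function toBinary(n) {
  return n.toString(2);
}

// Add two binary strings by converting to numbers and back.
function addBinary(a, b) {
  return (parseInt(a, 2) + parseInt(b, 2)).toString(2);
}

console.log(toBinary(13));           // "1101"
console.log(addBinary("101", "11")); // 5 + 3 = 8, i.e. "1000"
```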

I like that Leibniz came late to math. From wiki:

“Because Leibniz was a mathematical novice when he first wrote about the characteristic, at first he did not conceive it as an algebra but rather as a universal language or script.”

As a twenty-four-year-old guy with less math in my background than many high school students, now going back to college for an engineering degree that requires four years of math, I find it inspiring to see a major figure in mathematics come to the field at a similar age. Of course, Leibniz was a pretty talented kid when it came to math, but he wasn’t some towering child genius like Euler or Terence Tao. I love math, and another thing I like about Leibniz is his connection of philosophical methods like logic to mathematics. He seems to have felt it was possible to express any concept mathematically, and if you can express something mathematically, can you represent it programmatically?

In that quoted passage, ‘characteristic’ is referring to a notion of his known as:

characteristica universalis or “universal characteristic”, built on an alphabet of human thought in which each fundamental concept would be represented by a unique “real” character:

It is obvious that if we could find characters or signs suited for expressing all our thoughts as clearly and as exactly as arithmetic expresses numbers or geometry expresses lines, we could do in all matters insofar as they are subject to reasoning all that we can do in arithmetic and geometry. For all investigations which depend on reasoning would be carried out by transposing these characters and by a species of calculus.[52] – The Lieb himself

For my part, I do think that Leibniz’s thoughts on logic and human thought fit very well with the logic of software.

How?

According to Wikipedia, and I’m relying on it heavily here because Leibniz’s body of work is so large that I need a quick overview before delving into a couple of chosen pieces, Leibniz’s principles of logic reduce to two fundamental ideas:

1. All our ideas are compounded from a very small number of simple ideas, which form the alphabet of human thought.

2. Complex ideas proceed from these simple ideas by a uniform and symmetrical combination, analogous to arithmetical multiplication.

Modern neuroscience has dispelled tidy notions of an “alphabet of human thought”, of a small number of reducible, simple human thoughts that somehow combine to form all complex thoughts. Perhaps philosophy will find a way to accommodate the colossal advances in neuroscience. The philosophies of Leibniz and Descartes were built with the most rudimentary information; even basic biology was not available to inform their speculations.

This sounds suspiciously like software, Leibniz.

Consider the fundamental statements of a programming language like JavaScript:

Conditional statements 

if 

if some condition is true (like 1 > 0) then do something

Looping statements 

while

if some condition is true at a given time, do something while it’s true

for

if some condition is true, do something while it’s true, but also record what you’re doing and use the recording to determine when you should stop

Disruptive statements 

break

stop doing what you’re doing. if you’re looping, stop looping.

return 

stop doing what you’re doing and send back any information you have.
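To tie the statement types above together, here is a small, self-contained example using if, for, while, break and return (the function names and data are my own illustration):

```javascript
// if + for + break + return: find the first number above a threshold.
function firstAbove(numbers, threshold) {
  var found = null;
  for (var i = 0; i < numbers.length; i++) { // loop, recording progress in i
    if (numbers[i] > threshold) {            // conditional check
      found = numbers[i];
      break;                                 // stop looping early
    }
  }
  return found;                              // send back what we have
}

// while: keep doubling until the condition stops being true.
function doubleUntil(start, limit) {
  var x = start;
  while (x < limit) {
    x = x * 2;
  }
  return x;
}

console.log(firstAbove([1, 4, 9, 16], 5)); // 9
console.log(doubleUntil(1, 100));          // 128
```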

Consider this snippet of JavaScript code:

var a = 1;
var b = 2;

if (a < b) {
  a = a + 1;
}

And we end up with a being 2, and equal to b.

That’s a nice, simple little snippet, but it gets the point across. Software is the realization of Leibniz’s system of logic! There are just a small handful of fundamental ideas, like “if”, “for”, “while”, and “break”, that must be combined in logical ways in order to develop more complex behavior.

It isn’t humans, then, that construct complex ideas from a handful of simple ideas; it is software. Leibniz even built a mechanical calculator. In fact, his mechanical calculator was the first (to our current knowledge) to be produced in quantity. He was admitted to the Royal Society in part because of that invention.

What would Big Lieb say about our modern state of software engineering and logic, and the total proliferation of ultra-powerful (by his standard) computing machines? Would he feel that this is the age in which his loftiest dream could be realized: the creation of a logical machine that would use systems of reasoning to settle all disputes? What exactly such a machine would be made of and how it would work is probably borderline science fiction, but it’s safe to say that such a machine is within the realm of plausibility for people in our time, although the symbolic logical language Leibniz required to make it work doesn’t really exist for us. We’ve got math, and we’ve got regular old language, but no universal symbolic conceptual language where every human thought is represented by a symbol, and every complex thought by some systematic organization of symbols.
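Leibniz's dispute-settling machine is far beyond us, but the kernel of the idea, mechanically evaluating symbolic propositions, fits in a few lines of JavaScript. The encoding below is my own toy illustration, not anything from Leibniz:

```javascript
// A toy "calculus of reasoning": propositions are nested arrays like
// ["and", p, q] or ["not", p], with plain booleans at the leaves.
function evaluate(prop) {
  if (typeof prop === "boolean") return prop;
  var op = prop[0];
  if (op === "not") return !evaluate(prop[1]);
  if (op === "and") return evaluate(prop[1]) && evaluate(prop[2]);
  if (op === "or") return evaluate(prop[1]) || evaluate(prop[2]);
  throw new Error("unknown operator: " + op);
}

// "Calculemus": the machine, not the disputants, settles the question.
console.log(evaluate(["and", true, ["not", false]]));       // true
console.log(evaluate(["or", false, ["and", true, false]])); // false
```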

Back to Leibniz’s calculating machine and his contributions to information theory and theories of computation. From wiki:

In 1679, while mulling over his binary arithmetic, Leibniz imagined a machine in which binary numbers were represented by marbles, governed by a rudimentary sort of punched cards.[79] Modern electronic digital computers replace Leibniz’s marbles moving by gravity with shift registers, voltage gradients, and pulses of electrons, but otherwise they run roughly as Leibniz envisioned in 1679.

Next up, I’ll be heading over to Leibniz-translations.com to take a look at an infinitesimal portion of Leibniz’s vast writings. I may even read about his work on infinitesimals.

Visualizing US Population Data with R

My data visualization experience consisted of importing, analyzing and plotting some historical American population data from a CSV file with the statistical programming language R. It was my first dance with R, and it took a little while to get going, but it was rewarding and produced a very nice plot that accurately summarizes the data. I generated more plots from this CSV, but I’ve only included the standard plot here: the box plot I made didn’t feel very informative and was in fact a little confusing, so I omitted it and chose to focus on the informative, straightforward one.

Here is the entirety of my mini-CSV dataset:

"Year","Population"

1790,3.929214

1800,5.308483

1810,7.239881

1820,9.638453

1830,12.860702

1840,17.063353

1850,23.191876

1860,31.443321

1870,38.558371

1880,50.189209

1890,62.979766

1900,76.212168

1910,92.228496

1920,106.021537

1930,123.202624

1940,132.164569

1950,151.325798

1960,179.323175

1970,203.302031

1980,226.542199

1990,248.709873

2000,281.421906

There was another column of useless information, which I cleaned out to arrive at a simple structure: x-axis = year, y-axis = population in millions.
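The cleanup itself is trivial in code. Here is a hedged sketch in JavaScript (the language used elsewhere on this blog) of dropping an extra column from CSV text; the third "Notes" column and its contents are invented stand-ins for the useless column:

```javascript
// Keep only the first two columns (Year, Population) of each CSV line.
var raw = "Year,Population,Notes\n1790,3.929214,a\n1800,5.308483,b";

var cleaned = raw
  .split("\n")
  .map(function (line) {
    var cols = line.split(",");
    return cols[0] + "," + cols[1]; // drop everything past column two
  })
  .join("\n");

console.log(cleaned);
// Year,Population
// 1790,3.929214
// 1800,5.308483
```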

Below is a plot from R, generated using the plot() function.

[Plot: US population in millions, 1790–2000]

The first thing I noticed about this plot was how clearly it shows just how small America’s population was in the early days. In 1800, the population was 5.3 million people; more people live in the Boston metropolitan area today than lived in the entire nation then. Around the era of the American Civil War, the population was about 35 million souls (the average of the 31.44 million and 38.56 million populations recorded in 1860 and 1870). The death toll of that war was 620,000 American citizens from both the Northern and Southern states. 620,000 / 35,000,000 is about 0.0177, or 1.8 percent of the population, dead in the war between our states. If we multiply that fraction by the current population of 320,052,000 we get roughly 5,664,920. In today’s terms, a war of that scale would have killed almost 6 million Americans.
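The arithmetic above is easy to check in a few lines of JavaScript (the census figures come from the CSV; 620,000 and 320,052,000 are the round numbers used in the text):

```javascript
// Average of the 1860 and 1870 census populations, in millions.
var avgMillions = (31.443321 + 38.558371) / 2; // ~35.0

// Fraction of that population killed in the Civil War.
var fractionKilled = 620000 / (avgMillions * 1e6); // ~0.0177

// Scale the same fraction up to a present-day population.
var scaledDeaths = fractionKilled * 320052000;
console.log(Math.round(scaledDeaths)); // roughly 5.67 million
```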

Something else this plot brings to mind is an idea I’ve heard discussed a few different times: our system of government, the bicameral legislature, the method of electing Congressional representatives, and so on, was designed for a nation almost two orders of magnitude smaller than the one we now find ourselves living in. From wiki:

“We say two numbers have the same order of magnitude if the big one divided by the little one is less than 10. For example, 23 and 82 have the same order of magnitude, but 23 and 820 do not.” — John C. Baez

320,000,000 / 4,000,000 = 80

Since 80 is nearly 100 (that is, 10²), today’s population is almost two orders of magnitude larger than the population in 1790!
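The orders-of-magnitude claim can be double-checked directly: the base-10 logarithm of the ratio counts orders of magnitude (the 4 million figure is the rounded 1790 census count).

```javascript
var ratio = 320000000 / 4000000; // 80
var orders = Math.log10(ratio);  // log10(80) ≈ 1.90

console.log(ratio);              // 80
console.log(orders.toFixed(2));  // "1.90": almost two orders of magnitude
```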

It’s no wonder that we see such crippling inability to function as the normal state of our current House of Representatives. The body was not designed to accommodate hundreds of Congresspersons representing hundreds of millions of citizens. I’m not saying this is the only reason we experience legislative dysfunction, but it’s pretty compelling to consider the enormous population as a source of systemic problems for a governmental structure that likely was not created in anticipation of such a huge boom in population.

Adding a line:

[Plot: US population in millions, 1790–2000, with a connecting line added]

This small CSV data file wasn’t compatible with some of the more interesting plotting functions that R offers, like the histogram and more advanced box plots, and didn’t have enough data for a nice 3D scatterplot, but I think the plots produced were informative and concise.

Visualizing Data with R

As part of the Fall semester edition of Digital History at NVCC, I’ll be creating a visualization of some data that should be publicly available on the internet.

This data visualization needs to be devoted to some historical subject, and should be accompanied by an explanation of what the visualization reveals about the data.

As soon as I hear “historical data”, my first thought is to swing by the one-stop-shop for historical datasets in the US: data.gov.

Here are a couple of the data sets publicly available for download that I’d be very interested in taking a look at:

  • Consumer Complaint Database. “These are complaints we’ve received about financial products and services.”
  • Food Access Research Atlas. “The Food Access Research Atlas presents a spatial overview of food access indicators for low-income and other census tracts using different measures of supermarket accessibility, provides food access data for populations within census tracts, and offers census-tract-level data on food access that can be downloaded for community planning or research purposes.”
  • EDIT: Since these data sets are interesting, but not as historically-oriented as they need to be, I’m going to be exploring census data instead.

Since I’d like to work with R for this project, I’m going to take some time to do a few tutorials and look around for datasets in a format that is easy to work with in R; Excel, for example, seems to be an easy choice. I’ve got no experience with R and have wanted to try it for a while, so this is my shining moment of opportunity. Ideally I’d use JSON, as that is the format a huge number of web APIs provide information in, but I’m not sure I should devote too much time to learning R right now, as there are quite a few other things I’ve got to do.

Wolfram’s Timeline of Systematic Data and the Development of Computable Knowledge

One of the many interesting projects required in our digital history course is the construction of a timeline of events for some historical topic, built on and made available on the web.

For my timeline, I had originally hoped to create a simple timeline of the development of mathematics through the ages. Since this is such a huge topic, I would necessarily have had to leave out the vast majority of mathematical advancements, especially those developed during the Age of Enlightenment, when progress in mathematics began to intensify and reach a pace that has continued to this day.

As I researched this topic, I came across an incredible resource by Wolfram Alpha: the Timeline of Systematic Data and the Development of Computable Knowledge.

The timeline is divided into six sections:

  • 20,000 BC – 0
  • 0 – 1599
  • 1600 – 1799
  • 1800 – 1899
  • 1900 – 1959
  • 1960 – 2010

The last event is… the development of Wolfram Alpha as a website.

[Screenshot: the final entry on the timeline, the launch of Wolfram|Alpha]

Besides the Wolf-boosting, the timeline is excellent and is something I’ll definitely be using as a resource if I stick with my topic of “development of mathematics.”

 

Some of the interesting timeline points that caught my eye:

  • 1637: René Descartes

    René Descartes introduces coordinate systems to allow geometry to be studied using algebra.
  • 1684: Gottfried Leibniz

    Answering questions using computation

    Leibniz promotes the idea of answering all human questions by converting them to a universal symbolic language, then applying logic using a machine. He also tries to organize the systematic collection of knowledge to use in such a system.

  • 1687: Isaac Newton

    Mathematics as a basis for natural science

    Newton introduces the idea that mathematical rules can be used to systematically compute the behavior of systems in nature.

     

  • 1889: Giuseppe Peano

    Formalizing the rules of arithmetic

    Peano publishes axioms to give a complete formalization of arithmetic.

 

What is Digital History?

According to the Wikipedia page on the subject, Digital History is “the use of digital media and tools for historical practice, presentation, analysis, and research. It is a branch of the Digital Humanities and an outgrowth of quantitative history, cliometrics, and history and computing.”

The article goes on to explain how the adoption of computers in academia in the 1960s led researchers to develop new, quantitative means of asking and answering historical questions, with computers as the enabling factor.

Two of the leading centers of digital history are quite close to NOVA. One of them, George Mason University, is about a half hour away and houses the Center for History and New Media. Whether I’ll get a chance to stop by and pay a visit I’m not sure, but it would probably be an exciting, informative one.

As I sit here, trying to think of something to post on my blog, there are a couple of questions that come to mind:

-How does digital history affect my life?

As you live your life moment by moment, minute by minute, and on through the weeks and months and years, all of the information about your life and about the society you participate in falls under the purview of digital history. The social media posts, the (large) amount of data collected by Google and Facebook and other consumer technology companies, the voting records and credit card bills and cell phone usage, and all of the other information you generate as you go about living: all of it forms a picture of who you are, what you did and what you might be doing now. Gathering, collecting, organizing and reconciling all of those disparate pieces of data with each other is a central task for digital historians. One of the questions I’m interested in answering is how we deal with those tasks on a technical level. If all of this data is in different formats, governed by different laws granting access to different people, some of it proprietary and some not, how do independent historians go about constructing accurate, detailed narratives about their chosen area of study?

I suppose the challenges are much different for different time frames. If you’re doing ancient history of some culture where written records are non-existent or almost non-existent, where physical artifacts are on display in museums only and inaccessible, how do you practice your craft?

The modern historian faces almost the opposite challenge: there is just so much data. So much data on any given topic, and so much data even on the individuals that make up some event. If you were doing research on a factory workers’ strike in the 1850s, the amount of information you would have to deal with is quite manageable, if not readily accessible: news articles, some interviews, maybe some letters sent back and forth between activists, and some documents from before and after the strike. Maybe, if things got bad, some police reports, court documents and other legal records. Nowadays, if you wanted to do a report on a strike (if you could find one in a developed country), you could do an entire report on a single peripheral member of the strike based on the voluminous amount of information that person likely generates every day simply by being a member of our “technologized” society. Not a word? Maybe it should be.

If I get a chance, that’s something I’d like to read up on, discuss and post about: what kind of data is generated by people living today? Where does it go? Who owns it? Can you piece it together into a coherent narrative? Can an independent person without an enormous amount of resources and institutional support, say someone who is not a researcher at a well-funded university lab with a private army of grad students contributing to their work, do this?

Maybe.  Stay tuned!