Readers and robots

Publishing, on this side of the digital divide, has become not only simple but automatic. Messages are copied from storage to storage. Everything is a printing press, and we are all writers - producers of texts. But then, what does it mean to be read?

Write something and put it online. Within hours it has been read by a handful of robots - small programs crawling the web, indexing information for the search engines. But this is not what we want, is it? We want to be read. By a person. Not parsed by a program. We would like someone to react to the words we have written. But programs do react. Sorting our texts, they reference them with actions that reflect both syntax and semantics. With an increasing degree of complexity, I should add. So not mere reaction then - we want emotion. Irrationality.

Are there readers for all texts written? I'm guessing there are already pages never visited by anything other than robots. Pages waiting to be read. All waiting for an emotional response. Trees in a forest, falling. Can programs hear?

We are all readers, programs and persons alike, but what is the human role in a networked world? As publishing became free, the value of our attention went up. That unique attention we bestow on someone when reading what they wrote; allowing ourselves to be influenced; allowing a message to stir up emotion. Reactions neither rational nor random.

Is it our egos, a digital prejudice, that makes us value human attention over that given by programs, or is there some quality in emotions that cannot be programmed? The human reader, a gold standard in the information economy. Perhaps today, but both humans and robots evolve.

'Information overload' is a term sometimes used to describe the stress and frustration felt by people who are too connected. I wonder if it isn't really more a question of growing pains? Most of us have been taught to analyze as readers, but we are changing into readers who must associate and react.

To cope we shorten our attention span. Online texts today must be short and to the point, since many texts are competing for the reader's attention. Articles become summaries, blog posts are reduced to status updates and tweets - all evolving into information that may be taken in at a glance and then dismissed, flagged as 'liked', or re-tweeted. Thereafter forgotten. The brief texts contain links to be followed or ignored. The footnote is the message.

We send pictures that say more than a thousand words, yet they require only a glance before we decide whether to share them in turn. But a picture does not say the same thousand words to everyone who sees it, even if it may convey a similar message to all. It is associative, approximate, and quick where text is exact and time-demanding.

Maybe this is the connected reader of the future. Our role as analysts is over; instead we associate and react. At most we append a sentence or a tag to the incoming message before it is forwarded to the network again. The reader acts as a relay of information - associating, sorting, but not analyzing.

But wait, isn't this the role I gave the robots? The same programs that did not count as readers? Not far from it, and as the programs get more advanced and the readers more connected, both start playing similar roles, indexing information.

It may be that such a future carries a small gem. A chance to understand what it is to be human; to be alive.

When all complexity is peeled away from the human onion, and what remains is a future reader - a connected entity, merely relaying and tagging information - can we replace her with a program? Can we replace everyone? If yes, have we then proven that the mind is nothing more than a Turing machine; and if no, what secret lies at humanity's core?

But it may be that this entire strategy is in vain. Perhaps any network of future readers, people and programs alike, is dead. Defunct. Maybe the human magic that made it tick lay in the skins we peeled away?