Ben Franklin and Newspapers in the Cloud

A while back the Newspaper Club caught my eye. A simple, anyone-can-use printing service seems a great idea, and it is certainly a positive one. But I never mentioned it to anyone because, straight away, the concept felt tired (even to an unconditional lover of print). I forgot about it.

Then, this week, I read about some journalists in New York (and John Paton at CUNY) who are making newspapers in the cloud. The ‘Ben Franklin Project’ seems like a more creative antidote, enabling collaborative, sustainable and potent publishing.

They are using free web-tools to produce an issue of their paper (hence in the cloud). But perhaps more importantly, they are proposing a way to reconfigure reporting and publishing, online and print, and the relationship between the crowd and journalism.

Here is a nice piece explaining the Ben Franklin Project.

posted by Ossie Froggatt-Smith.

A sociological look at chatroulette, by Sarah Sternberg

The renowned sociologist Erving Goffman once said that “a conversation has a life of its own and makes demands on its own behalf. It is a little social system with its own boundary-maintaining tendencies; it is a little patch of commitment and loyalty with its own heroes and its own villains.” This quotation struck a chord with me a few weeks ago, as, for the first time, I navigated the treacherous waters of chatroulette. For those struggling to keep up with the pace of the zeitgeist, chatroulette is an internet programme that allows users to have a video chat with any other user, anywhere in the world, at random.

Chatroulette bucks a growing trend - namely, the refinement and tailoring of an ever more personalised internet experience. On Facebook, we only see updates from our friends; on Twitter, only those that we follow appear on our homepage; and our RSS feeds receive news from a finely tuned collection of websites and blogs. Such constant refinement is part of a wider process that seeks high levels of personalisation in all areas of consumer life: if I like my coffee grande, iced, with an extra shot of espresso and 2% milk on the side, you can bet my internet browsing will be just as fussy.

Goffman’s sociology is deeply concerned with the rules that govern social life: when we buy a coffee, go to the doctor, or studiously ignore the homeless person on the street, an intricate order with scripted interchanges is ritually applied, one that is based on context and roles. You wouldn’t, for example, ask your doctor for a Grande macchiato to go, or ask a homeless guy to diagnose your rash. Chatroulette is the very opposite of this: occupying a space that is geographically unlimited and contextless, it does away with the predictability, ritual order and rules associated with most of social life.

For Goffman, social interaction is characterized by heroes and villains on the basis of who accords or fails to accord with the social rules - the villain of the piece being the person who laughs in the face of said rules, such as the tramp who does not play his role properly, and throws the passer-by’s money back in her face.  On chatroulette, in Goffman’s analysis, we are all villains to a greater or lesser degree, breaking the rules of social life by the very fact that, in this virtual reality, no rules apply.

I think it’s telling that, during my first chatroulette experience, one of my first encounters was with a person wearing a mask. Sinister as this was, he took it off to reveal yet another mask underneath, which not only provoked laughter from my housemate and me, but revealed further what makes chatroulette so compelling and so terrifying. Jumping into a social interaction with no context, no defined roles and no rules, we are all equal players of the game, free to define the situation as we see fit and free to present whatever type of front we wish without fear of being found out. We have no idea who is on the other side of the computer screen, and more disturbingly, we can never really know.

Goffman claims that rules, the contracts that govern social life, are necessary because of the fragile nature of social interaction and the social self, over which hangs the threat of constant annihilation.  In throwing away the rulebook and leaving us all to fend for ourselves in the precarious battleground of internet interaction, chatroulette is proof that this annihilation has already begun.

Video: “chat roulette” by Casey Neistat, on Vimeo.

Biodiversity on the rocks: what’s a conservationist to do? By Alex Cagan

“Especially this is written for those souls… who are prevented by the invincible decrees of Fate from ever seeing the wonders of the Wilderness save in the pages of a book.” - Dedication from Tales of an Empty Cabin by Grey Owl.

In 1931 Archibald Belaney, also known by his adopted Ojibwa name “Grey Owl”, wrote about the importance of conserving the beaver and its habitat.  A fur trapper and fantasist who falsely claimed Apache descent, the British-born Belaney was nonetheless one of the first and most effective “apostles of the wilderness”.  What would he have written today, in 2010, marked as the International Year of Biodiversity?  The latest update of the Red List of Threatened Species, compiled by the IUCN (the International Union for Conservation of Nature, whose Red List unit is based in Cambridge), states that 21% of all known mammals, 30% of all known amphibians and 12% of all known birds are threatened with extinction.  According to Jane Smart, director of IUCN’s Biodiversity Conservation Group, “The scientific evidence of a serious extinction crisis is mounting”.  For many species, it is already too late.  The Yangtze river dolphin was declared extinct in 2006 after a failed six-week search.  The orangutan, our fellow Great Ape, is predicted to be ecologically extinct within 50 years.  Faced with accelerating rates of extinction worldwide, and despite its best efforts, the conservation movement has been forced to go back to basics.

Biodiversity is defined as the “totality of genes, species and ecosystems of a region”.  This three-tier approach is fundamental to how we classify biological variation.  It may also prove essential to how we manage conservation.  Historically, ecosystem diversity has been the dominant priority amongst conservationists.  Habitat destruction is the primary cause of species extinction, so most conservation groups prioritise ecosystem preservation.  It makes sense: remove the rivers and where is the beaver going to swim?  Unfortunately, habitat preservation is often extremely difficult.  Many biodiversity hotspots are threatened by the twin pressures of human settlement and the development of land for commercial purposes.  The Amazon rainforest, which has the highest level of biodiversity in the world, is being deforested so rapidly that the Brazilian government recently announced it would be happy if, in 11 years’ time, the Amazon were being deforested at an annual rate of only three times the area of Sao Paulo.

The ecosystem-centered approach to conservation works in principle but fails in the all too frequent cases where habitat loss is unpreventable.  Conservation happens within national boundaries, and as such issues of economics and sovereignty are bound to come into play.  The pressure the West exerts on developing countries to preserve their natural habitats is often interpreted as an excuse for hindering the economic development of their competitors.  In the worst cases regions are preserved only to become expensive destinations for wealthy foreign tourists, pockets of biodiversity nibbled at the edges by impoverished local communities denied access to the land.  Additionally, many developing nations regard biodiversity as one of their key resources and interpret attempts to store the DNA of their native species as imperialistic biopiracy masquerading as conservation.  The hostility and conflict generated by such approaches call into question the motivations of conservation in a globalised society riven with inequalities.

Regardless of this debate, one thing is certain.  Once a region’s ecosystem diversity has been critically reduced, the species and genetic diversity fall like dominoes.  Conservation strategies focused on species-level diversity, such as captive breeding programs, have long been used in such cases.  These programs are difficult to manage, as captive populations require regular influxes of fresh genetic diversity to avoid inbreeding.  The ultimate aim is the re-establishment of species in their original habitat.  The concern is that most captive populations face inevitable deterioration in artificial conditions, destined to become shadows of their former selves.

The most recent, and controversial, conservation strategy aims at the lowest level of the pyramid of biodiversity: genetic diversity.  Just as the species diversity approach to conservation depended on the emergence of the zoo, the genetic diversity approach is the result of advances in molecular biology.  The Frozen Ark Project, based at the University of Nottingham, aims “to conserve the genetic resources of the world’s most endangered animal species” by cryogenically storing their viable cells and DNA.  As the cost of sequencing entire genomes continues to shrink with improving technology, the feasibility of storing the genetic code of an enormous variety of endangered species increases.  The scimitar-horned oryx, extinct in the wild since 2003, was the first animal to enter the Ark.

The founders of the organization, Dr Ann Clarke and Professor Bryan Clarke, were inspired to start the project after they witnessed first-hand the disastrous impact that human activity can have on biodiversity.  Speaking with me in Cambridge, Ann Clarke recalled how, while on a scientific expedition with her husband to collect tree snail specimens from the Polynesian islands, they noticed a worrying correlation.  While they found the snails they were looking for on uninhabited islands, on the islands with human populations they struggled to spot even a single specimen.  The cause of this alarming decline was soon traced to the introduction of the predatory rosy wolf snail (Euglandina rosea).  Distressed by the rate at which local species were vanishing, they decided to do something about it.

For many of the snail species, the damage to the island ecosystems was already irrevocable.  Traditionally, international captive breeding programs were the only solution to maintaining species when their original habitat was no longer viable.  However, such programs can take years to organise and are very costly - bad news in a sector where funding is always extremely competitive.  The Frozen Ark Project provides a relatively inexpensive way to preserve the essential elements of a species, in the form of viable cells, DNA and RNA.  In many ways it is a tragic reduction of a living organism.  However, the alternative is extinction.

With each species that goes extinct a record is lost - an exquisitely detailed account of billions of years of life, each one unique but all interconnected and sharing a common starting point.  The DNA of species is worth preserving if only for the grandeur of the story it tells.  It is a story in a language we still cannot fully comprehend.  If efforts are not made to preserve the DNA of species nearing extinction then there will be gaps, hundreds of missing pages, in the story.  Beyond this, the DNA not only provides a history of the past; it may also provide hope for the future.

While the prospect of resurrecting species from their DNA alone remains a tantalising (and ethically fraught) possibility, the modus operandi of the Project is to preserve the genetic record of species so they may be studied for their scientific insights.  There are voices within the conservation movement calling such approaches defeatist, diverting critical resources from the preservation of existing species and ecosystems.

What would Belaney have made of these modern approaches to conservation?  No doubt he would have struggled to see beaver DNA in storage as a substitute for beaver thriving in the woodland habitat he loved.  Nor should we lose sight of his mission.  Twenty-first century conservation will depend on the successful integration of all three levels of biodiversity management – genes, species and ecosystems.  For example, genetic conservation can strengthen species conservation by providing the genetic diversity to reinvigorate captive populations.  If, acting together, they can prove to be more than the sum of their parts, then we may yet lay claim to old Grey Owl’s approval.

The Thought Police by James Randell

Is there anybody in there? Jonathan Webb reports from science’s erstwhile final frontier: consciousness.

How does conscious experience arise from a kilogram of damp, grey, rumpled tissue? There is, of course, no easy answer.  Consciousness tends to make scientists nervous - even the ones who have pitched their careers towards unravelling the billions of cells and thousands of miles of fibres that make it possible.  The field of neuroscience employs an army of researchers, nearly all chiselling slavishly away at minuscule details of how the brain functions.  Precious few will announce “consciousness” as the target of their research, for fear of becoming a laughing stock.  The last thirty years have seen no shortage of treatises on the subject, but these have mostly been written by philosophers, or by Nobel-winning scientists tending towards philosophy in their old age (see Francis Crick or Gerry Edelman).  Nevertheless, scientists today can examine human brain activity more precisely than ever before, and plenty of ongoing research, including the two examples discussed below, directly or indirectly concerns this contentious phenomenon.  So why not shoot for the biggest teddy in the sideshow?

The problem is partly linguistic, and partly historical.  “Consciousness” is an intimidating, all-encompassing term that arguably has no place at all in scientific discourse.  It can refer to many different things: awareness, wakefulness, experience, self-perception, thought… In fact, it probably amounts to a delicate assembly of several processes, such as our ability to perceive the world and to lay down memories of those perceptions.  As we will see, modern science prefers to consider such components individually, but the study of consciousness as an entity is even older than the scientific method.  As a great, inward mystery, it has occupied philosophers since they first contemplated their own existence.  In the 17th century Descartes articulated the problem as one of reconciling mind and matter: how do thought and brain mix? Where is the intersection? He famously decided it was in the pineal gland, a tiny nub near the centre of the brain which in fact secretes melatonin.  Misplacement of the soul aside, Descartes’ “dualism” dominated thinking on the subject for hundreds of years and it is still the way many of us conceptualise the problem today.  Biology was much slower to weigh in; over subsequent centuries, knowledge of the brain’s anatomy steadily accumulated but the whole enterprise was somewhat derailed by the rise of phrenology in the early 1800s.  The deluded confidence with which phrenologists ascribed mental faculties to lumps and bumps in the skull remained disastrously high for most of that century and it was not until the turn of the next one that a new age of science, spearheaded by two completely new disciplines, promised real progress. 

In the 1890s, William James founded a new science with the question of consciousness at its core.  Indeed, for much of his two-volume opus The Principles of Psychology, James prattled on about it in a very natural and intuitive way.  He saw it as a fluid sequence of thoughts and perceptions, for which he actually coined the term “stream of consciousness”.  The best way to study this stream was Introspective Observation: “looking into our own minds and reporting what we there discover”.  So-called “introspectionists”, particularly from two laboratories in Leipzig and Cornell, spent years quizzing carefully trained experimental subjects about the contents of their own minds, trying to assemble a periodic table of perception.  Unfortunately their painstaking labour was hamstrung because the data were impossible to standardise; the Cornell camp ended up with a grand total of 44,000 discernible sensations while Team Leipzig managed only 12,000.  Between the wars, a new school of psychology tried to secure more credibility by restricting its study to observable behaviour.  The “behaviourists” dealt in input and output, stimulus and response, and basically ignored the possibility that anything interesting might happen in between.  Things improved with the development of cognitive psychology in the 1960s, but only insofar as “consciousness” reappeared as a box in a flow chart.  The cognitivists treated the brain like a computer, and made good use of new ideas about information processing.  Overall, however, psychology in any of its forms has never managed to firmly fasten its hypotheses about consciousness to the hardware of the brain.

What about neuroscience? This was the other horizon-shifting success of the late 19th century: experiments that described the brain’s electrical currents and, crucially, its constituent brain cells.  Camillo Golgi and Santiago Ramón y Cajal, ferocious intellects and fierce rivals, together accomplished the latter and had to share (tersely) the 1906 Nobel Prize.  Their trailblazing microscopic images, still remarkable to behold, presented the neuron to the world.  Here at last was a plausible building block for the nervous system - intricate and varied in structure, and capable of receiving and forwarding electrical signals.  This discovery changed the game forever, and for the branch of science that began here with the “neuron doctrine” and soon co-opted its name, the 20th century was one of almost indescribable progress.  Neuroscientists elucidated countless details of how neurons transmit and aggregate signals, how they receive sensory stimuli and effect responses.  But despite all these advances at the level of neurons themselves, or perhaps because of them, the question of consciousness seemed distant.  It acquired the aura of the unassailable, and the business of relating it to brain cells firing was done speculatively and mostly in the twilight of distinguished careers.  After years publishing “serious” science in top journals, it might be said of any successful neuroscientist, especially one with a propensity for pontificating, “Oh, isn’t it time for them to write a book on consciousness?” Such cynical murmurs are perhaps unfair; many such books exist and they are not all without merit.  The fact remains, however, that most working neuroscientists have shied away from the subject.

And yet today, we have the gadgetry to look inside the active - conscious - human brain and detect signals that relate very closely to the firing of neurons.  These expensive toys are the stuff of William James and Camillo Golgi’s wildest dreams, and they bring psychology and neuroscience into the same room.  Into the same enormous tubular magnet, in fact, for the technique in question is fMRI, or functional magnetic resonance imaging.  In a nutshell, fMRI uses a really, really strong magnetic field to detect tiny changes in blood flow within the brain; active brain areas receive extra blood and active brain cells suck more oxygen out of it.  A computer records these changes as tiny “voxels” of activity in a detailed 3D model of the brain, while that brain’s owner lies patiently inside the fMRI scanner, trying not to move.  Since the early 1990s, scientists have been harvesting these flickering populations of voxels and gaining glimmers of insight into every imaginable aspect of human brain function - including consciousness.  Mind you, they are still reluctant to aim straight for the biggest prize; a giant teddy bear is, after all, awkward to carry home.  Scientists thrive on details, and will nearly always tackle big problems by breaking them down into small pieces.  This approach, coupled with the power of functional brain imaging, is beginning to bear fruit.  There is even a strong case to be made, by scientists and philosophers alike, that once we understand the component processes that together might be called “consciousness”, there is nothing further to explain.  I would like to leave that particular question in the “too hard” basket (philosophers even call it “The Hard Problem” - there are books on it) and instead simply offer a sample of current research.  I have plucked two examples directly from the journal pages of early 2010: in one, subjects stare at invisible houses; in the other, a vegetative patient answers questions using only his brain.

Perception is one component of consciousness that is particularly accessible to experiments.  If we are conscious of a stimulus, what difference does this make to the neuronal activity that it stimulates? It may seem paradoxical that your brain processes stimuli of which you are unaware, but this is precisely the circumstance explored by Aaron Schurger and his colleagues at Princeton.  Writing in Science in January, Dr Schurger reported experiments in which volunteers lay in an fMRI scanner looking at countless pictures of faces and houses, some of which they couldn’t see.  These “invisible stimuli” are created by presenting simplified monochrome illustrations separately to each eye; if the foreground and background colours in one image are reversed, the image disappears.  Dr Schurger’s subjects were asked to state whether the drawing before them was a face or a house - an easy task, until an example with reversed colours was presented.  In these cases, the subjects’ brains still showed activity consistent with the image being processed, even though they couldn’t perceive it.  The researchers looked in the visual cortex for differences separating the activity in these trials from those in which the same image was consciously perceived.  Interestingly, they found that the activity during conscious perception was more reproducible than during invisible stimuli: the spatial pattern of active voxels was similar from trial to trial.  It has already been proposed that what designates a particular pattern of brain activity “conscious” might be its intensity (how many neurons are firing, and how many times) or its synchronicity (the degree to which the firing events are coordinated); Dr Schurger’s results suggest that consistency is also key. 

This reproducibility, however, can only be observed across multiple presentations of the same stimulus; it cannot be the decisive factor that the brain uses to make a perception “conscious”, because we don’t need multiple presentations in order to see something.  These comparisons are a privilege of experiments like Dr Schurger’s.  They inform us about the nature of the processes themselves - in this case, perhaps the neuronal activity behind a conscious visual perception is more consistent or stable because it is fine-tuned by connections from other brain areas.  (This is beyond the reach of Dr Schurger’s data, but it is a reasonable hypothesis.) These other linked areas might be responsible for other facets of consciousness.  Importantly, the connections are likely to act in both directions, so that visual perceptions influence other processes, like attention or decision-making, but are also constrained by them.  “Recurrent connectivity” that goes back on its tracks like this is a common feature in the literature of consciousness.  Gaining access to various parts of the brain, and receiving input from them in return, is likely to be crucial for rendering information conscious. 

A more fundamental distinction is between consciousness and unconsciousness.  In this case we are concerned with consciousness as “wakefulness”, an aspect that is also open to investigation with fMRI.  It is particularly important in the clinical setting, where families long to communicate with loved ones devastated by brain injury, and where the decision to provide or remove life support can rest on a definition or a diagnosis.  In a fascinating series of experiments over several years, scientists from Cambridge and Liège have performed fMRI on comatose patients and found that some patients classified as “vegetative” can in fact alter their brain activity on command.  Four years ago, Adrian Owen and his collaborators identified such a patient and found that if she was instructed to imagine playing tennis or moving around her own house, areas of the brain specific to each task would light up.  A new study from the same team, published last month in the New England Journal of Medicine, describes the proportion of minimally responsive patients in whom such function might be uncovered (specifically, 5 out of 54).  These numbers are not high, although they do make a compelling case for more careful diagnosis.  Moreover, the fact that activity in specific regions is evoked following specific instructions does not mean that sundry other elements of consciousness, such as self-awareness and memory, are present.  To trivialise the whole affair - and to spin Descartes in his grave - it will never be as simple as “I think I’m playing tennis, therefore I am.”

In a few special cases, however, it might come close.  Dr Owen’s paper has an intriguing addendum, in which one patient was instructed to answer yes-or-no questions by switching between spatial thoughts (navigation) and procedural ones (tennis).  Remarkably, he “answered” correctly when questioned about the names of his family.  This is a breathtaking finding, tinged with sadness; it offers the possibility of communicating with outwardly unresponsive patients, but it requires one of the most expensive and computationally demanding techniques in neuroscience.  Bringing this sort of communication to the bedside is an enormous challenge but Dr Owen and others are already working on the possibility of using electroencephalography (EEG), which requires electrodes taped to the scalp instead of a ten-tonne, multi-million-dollar magnet.  Years of research and myriad ethical considerations remain to be negotiated, but the potential benefits for patients and their families are immense.

These two recent papers approach consciousness from different angles, as well as from different sides of the Atlantic.  They illustrate the slow but steady progress that science is now making on the subject.  This is quite a contrast to the state of affairs just over a century ago, when the first psychologists and neuroscientists were poised to storm nature’s “last citadel”.  Perhaps partly because they still saw it as such an ultimate, singular challenge, their advances across its walls were slow or stalled for many years.  Now, it is a routine subject for experiment.  The boundary between psychology and neuroscience is blurred by techniques like fMRI, and for researchers chipping away in the field, consciousness is not the intimidating edifice it once was.  It is an observable phenomenon, a characteristic of the pixels on the screen.  Scientists are reluctant to talk about it in the terms that once made it an inscrutable apparition, because they view it as an assembly of complex - but comprehensible - processes.  By understanding its components, we may explain it as a whole, or perhaps come to appreciate that it is simply the sum of its remarkable parts.  We can also start to understand the consequences when this physical process, the flood of electrochemical activity that makes us who we are, is damaged.  And then, if there is anybody in there, we might just be able to say hello.
Lights, Camera, Inaction: public debates and the erosion of authentic engagement. By Abigail Jones

In societies where modern conditions of production prevail, all of life presents itself as an immense accumulation of spectacles.  Everything that was directly lived has moved away into a representation.

Guy Debord, The Society of the Spectacle (1967).

The social critic Theodor Adorno famously complained that a modern alliance of culture and entertainment both debased culture and elevated frivolous amusement.  It is unlikely he would have approved of Intelligence Squared, whose well-attended and very well-broadcast public talks promise “discussion, conversation, and sexy debate”. 

On 23 February the organisation put on a talk in the British Council’s Cadogan Hall in London in partnership with the BBC and Our Shared Europe.  The motion of the evening, chaired by the BBC’s efficient and statuesque presenter Zeinab Badawi, was “Europe is Failing Its Muslims” (how we love these violently reductive provocations).  Supporting the motion were Swiss scholar and writer Tariq Ramadan and Petra Stienen, formerly a Dutch diplomat in the Middle East, and a senior adviser in social development.  For the opposition was Douglas Murray, controversial neoconservative columnist and director at The Centre for Social Cohesion.  His second was the Danish journalist and editor Flemming Rose, who infamously commissioned the Jyllands-Posten cartoons of the Prophet Muhammad, which fuelled violent condemnations and counter-condemnations across the globe in 2006.  Rose has not lived peacefully since, and his reputation was a reason for the neon column of police vans outside the building.

Intelligence Squared debates follow the “Oxford style”: each speaker is given six minutes to develop their case, after which there is a period of free argument across both sides and time for responses and questions from the floor.  The debates are recorded with a live audience and shortly afterwards the edited versions are launched with great efficiency onto YouTube, television, radio and iTunes.  The net the organisation casts is vast - an estimated 72 million have watched their debates on BBC World News alone.

That figure is a striking signal of how knowledge exchange and the political economy have changed and been re-codified in the age of mass communication.  The easy dissemination of visual material (persuasive by its mimetic qualities) has changed our relationship to information, so that our consumption of it is passive but total.  We read, download, watch, stream with the habituated assumption that these processes will be swift and untaxing.  However, when sight is elevated as the pre-eminent sense in a society and takes pride of place as the domain where knowledge can be consumed, its members are not activated as participants, but as spectators.  In this system, images and facts are deemed authentic and easily understood simply by their proliferation and accessibility.  The pseudo-proximity of a world mediated by images, however, conceals the fact that we are no longer engaged with genuine lived experience.

In Cadogan Hall, cameramen circled the speakers on their illuminated platform, and swept past the audience on wheels down the aisles.  Badawi trailed to the camera, and directed the affair with imposing telegenic authority.  It seemed that the event’s dominant objective - to be transmitted across the world as an arresting, topical piece of public broadcasting - was the main reason for its vague, unengaged atmosphere on the evening itself.

When public debate and political life meet as spectacles or performances - as this one did - the omniscience of the camera makes genuine interaction near-impossible.  There was a sense of profound self-doubt in the audience at “Europe is failing its Muslims”.  The camera lens wields a unique power and authority; when people are aware of it, their submissiveness is instinctive.  The prospect of being seen by the camera - known by it, seized in time by its gaze - engenders in the subject an extraordinary moderation and self-reflexivity.  Dissent is extinguished, complexity ironed out.

Here no embarrassment or slip-up can go unseen or be forgotten.  Once committed to tape, failure exists forever, so it is best to avoid trouble altogether.  Everyone at the event understood that this piece existed for a mass audience, and that they were therefore implicated in its construction.  Thus the event was not really present at the point of its origin – caught in a limbo between everywhereness and nowhereness, it could never be grasped.

As the debate progressed, even the speakers became dispirited with the realisation that they were mired in little more than an empty simulation of exchange, that they had been lured into unholy reductionisms.  But then, this wasn’t a dialogue – it was a piece of theatre, straining with a sense of confusion and vagueness which the organisers had to conceal.  The potential for productive complexity in this exchange was crushed by the very fact that public debate now only exists in the virtual domain of mass media.

The work of the sociologist and philosopher Jean Baudrillard considered how interactive technologies have supplanted political life and industrial production as the dominant principles that organise society.  In Simulacra and Simulation (1981), Baudrillard describes a transformation from a society where the mode of production dominates to one where the main social principle is the code of production.  In this new condition, all of social life is structured by and mediated through signs, so that even labour no longer exists as an active force of production, but is simply a sign in the midst of a world of signs – an expression of one’s social position and incorporation into wider society.

For Baudrillard, the modern age is governed by an excess of representation and reproduction, and signs are now the primary determinants of social life.  However, when the representation of something begins to stand for that something – creating that thing’s reality and preceding it as a sign – it becomes increasingly difficult to discern “the real” from the symbolic copy.  Thus, according to Baudrillard, we have no means to distinguish the authentic thing from its simulation.  The two have merged irreversibly so that the virtual is now also the real – a “hyperreal”.

Even the deployment of statistical data and opinion polls fits into the model of public life as spectacle.  With their simplified theatricality, produced in an atmosphere of sensory and informational excess, such quantitative abridgements exist with a samey dead-end meaninglessness.  Though statistics pack a dramatic and persuasive punch, and are consequently used and repeated with abandon, they are little more than instructive simulations: they announce that something is supposedly a certain way and people adjust their opinions, assumptions and social behaviour accordingly.  With the spectacle of media as the dominant mode, public televised debates like this one structure social space and determine people’s relationship to the world.

I’m sure that Intelligence Squared resist the idea that televised debate is homogenizing, and maintain that it is potentially enlightening, cohesive, dynamic - and in the public interest.  In their mission statement, they insist that there exists across the British public a “pent-up demand for participating in the intellectual struggles of the day”, and that its “hunger […] to be involved in such intellectual tournaments is undeniable”.  Many would argue – as Douglas Murray did – that such events are important and productive reflections of our society’s democratic tendencies.  Indeed, Murray rebutted the idea that Muslims were undervalued and demonised in Europe by claiming that the very existence of such a debate was proof of Europe’s tolerance, nay its “suicidal generosity”.  Although Murray dutifully played the role of the dashingly intolerant and intolerable quipper, the audience murmured their collective acquiescence on this point.

However, the idea that televised public debate or news is implicitly valuable is trotted out without much reflection.  This kind of event isn’t necessarily beneficial to everyday life and social attitudes at all.  It isn’t even informative, in the strictest sense of the word.  If anything, it is dis-informative.  Although the very fact that events like this debate are organised so frequently seems to tell us that our society is politically engaged, reflective and informed, this is not necessarily true.  This kind of exchange of information does not mobilise the social field – it deactivates it.  In fact, I would dare say that public debate’s overproduction in the media (through broadcasting, streaming, blogging) stands in direct proportion to the degree to which authentic public engagement has disappeared.  That which is absent in real life can always be generated in the auditorium, cutting room and interface.  But when social relations are determined by the terms of media communications, the result is the growing erosion of social life.

The televised debate is now available on the Intelligence Squared website.  However, the withdrawn, unengaged, apathetic mood that hung over Cadogan Hall that night is not apparent in the edited programme: the tightly regulated lethargy is well-concealed by the sensuality of the spectacle.  But then, no matter what happened in real life, “Europe is Failing its Muslims” was always destined to succeed as a broadcast.  I fear we would not tolerate anything less.

Dave on the make: Trevor McDonald meets David Cameron

I am not ashamed (well, I am a bit) to admit that I’m the sort of voter who could be swayed by a piece of programming – but then so are many people.  David Cameron knows this and it is pretty startling, therefore, that he thought that having Trevor McDonald follow him around on TV was the way to win votes.  Not to be partisan: it is startling that Gordon Brown thought appearing on Piers Morgan’s Life Stories was a good idea too; but this was sillier. 

Largely this was because Cameron is fundamentally sillier than Brown, but it was also because the programme gave him so much more scope to be silly.  We saw Dave on the train, Dave on the phone, Dave on the sofa; but the whole thing was Dave on the make, and not even subtly so.  

It’s worth a look, but in case you get distracted on your way to the video by Michael Winner’s Dining Stars or something, here are the moments everyone should know about:

1) “If the next election is about, ‘Let’s not have a posh Prime Minister,’ I’m not going to win it.”  You don’t say.

2) TMcD describing the moment when SamCam had to go back to Bristol to continue her art degree, and David back to London – “And so began… a long-distance relationship.”

3) DC informing TMcD, with a look of genuine wonderment, that “the thing about Samantha is that because she has her own job, she just has this 40,000-foot view on what I do.”

4) The bit where DC tried to buy some food from a train station’s vending machine, and some wags joshed him from across the platform: had he put an expenses form in for that?

Posted by Roberta Klimt

WMDs on the WWW.

The debate over the possession and disclosure of information is doing the rounds. gnome and I are positive about spreading it. Others, not so much.

It is not new to whine about intrusive security cameras. And I understand there is a commercial imperative for some companies, specifically those with a profile as impressive as Google’s, to be conscious of their strength. To this end, we have seen some programs and programmes aiming to destroy information. At gnome we posted Suicide Machine, and The Economist recommends the data-liberation efforts of Google itself.

The loudest pro-deletion voice is the one yelling RIGHT TO PRIVACY. But mass data-storage is not a new threat either. Digital records have replaced material ones, but what sort of information do we give out online that we kept private before? And what sort of information do we give out against our will? Very little of the data I give out against my will is information I would not have given out offline. It strikes me that a fuss is being made over the Internet because it is the Internet.

And this is not just silly but also dangerous. Deleting information online has already caused problems for enquiry. Paper records cannot be destroyed with such flippancy. And online trails can be harder for lawyers or police to follow than offline ones: they may simply cease to exist, unnoticed.

The delay caused by, for example, the Public Records Act of 1958 will multiply this loss by an unknown factor (my guess: a large one) before it is realised. Like other problems facing human society, mass digital information loss will probably hit when it is already too late.

posted by Ossie Froggatt-Smith

Lies of the Land: Edward Randell explores the art of mapping

Now when I was a little chap I had a passion for maps. I would look for hours at South America, or Africa, or Australia, and lose myself in all the glories of exploration.  At that time there were many blank spaces on the earth, and when I saw one that looked particularly inviting on a map (but they all look that) I would put my finger on it and say, When I grow up I will go there.

Joseph Conrad, Heart of Darkness (1899)

…any map, any reduction of a complex landscape into two clean, clear dimensions, somehow thrilled and comforted me.  More than thirty years on, it still does.

Mike Parker, Map Addict (2009)

Inviting yet comforting.  Between them, Marlow and Parker come close to pinning down the contradictory allure of maps.  They are inviting – even after the blank spaces have been largely filled in – because their unfamiliar shapes and place names promise exotic adventure.  They are comforting, because this exoticism is rendered unthreatening by its containment within two dimensions, within blocks of pastel colour, within straight grid lines.

Nothing more clearly embodies our impulse to rational order than maps.  Yet they also provoke reactions that spring from other, entirely visceral impulses.  As Frank Jacobs, creator of the wonderful Strange Maps blog, puts it:

There is the instrumental part of maps and there is also an emotional part of maps – that unquantifiable emotional connection to that type of representation of reality.  I’ve had so many responses to my blog from people saying “when I was a kid I used to read atlases like they were books, because every page was like a story, and I’d get really engrossed in it, and travel the coastline and see where the borders were, and imagine being there and how people there lived.”

However loudly the maps might protest that they are not bedtime stories but sensible, empirical creatures, we instinctively know better.  The border between cartography and imagination has always been, at best, sketchy.  Literature, painting and even music are littered with maps, grounding their artifice in a sense of the “real”; meanwhile, maps often stylise “reality” to the point of artifice or fantasy.  And the best maps are themselves works of art.

“Are you saying the map is wrong?”

Says Jacobs:

We trust maps to tell us something about our spatial relationship, but we don’t always get the fact that they’re also lying to us.  On a very fundamental level they’re two-dimensional representations of a three-dimensional reality, so they always distort the surface.  And also in the selection of what is put on the map, they will leave out a lot.  The mapmaker is like the editor of the reality depicted in the map, so it reflects a subjective point of view.

Maps necessarily show us only a thin sliver of the world: they are defined by the variable or variables they document, and also by the biases of their “editors”.  For instance, cartographers have long argued about the best way to flatten out our globe.  The map of the world as we know it in Britain betrays a number of prejudices: it places us in the middle of the map, in the top half, and by using the traditional Mercator projection it makes our land mass disproportionately large.  The West’s inflated self-importance is not only reflected in such a map but, arguably, exacerbated by it.  Since the 1970s there have been voices clamouring for the Gall-Peters projection to be used instead.  Their side of the debate is perhaps best summed up by The West Wing.
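The distortion can even be quantified.  Under the standard spherical Mercator formula (the textbook projection, assumed here rather than drawn from any of the sources above), the area of a region is inflated by the square of the secant of its latitude, which is why land masses far from the equator balloon on the page.  A quick illustrative sketch:

```python
import math

def mercator_y(lat_deg):
    """Vertical Mercator coordinate for a latitude, on a unit sphere."""
    lat = math.radians(lat_deg)
    return math.log(math.tan(math.pi / 4 + lat / 2))

def area_inflation(lat_deg):
    """Factor by which Mercator exaggerates areas at a given latitude (sec^2)."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

# No distortion at the equator; Britain (roughly 52 degrees north) is drawn
# about two and a half times too large; at 60 degrees north, four times.
for lat in (0, 52, 60):
    print(lat, round(area_inflation(lat), 2))
```

The further north the map’s makers live, the better the projection flatters them.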

Worthy as it may be, the Gall-Peters projection is fairly useless: the Mercator version, at least, gets the shapes right and enables maritime navigation and exploration.  Indeed, we owe our world map to the exploratory (imperialist) impulse described by Marlow in Heart of Darkness: the desire to discover new lands and stick a flag in them. From about the 15th century onwards, maps had to be instruments, where before they were often content to be symbols.  Medieval maps like Hereford Cathedral’s famous Mappa Mundi would certainly not have got you to America:

These maps were more concerned with piety than accuracy.  In many cases, ecclesiastical map-makers knew better, but chose to ignore topographical realities in the service of religious symbolism.

In fact, for all the developments in cartography since the 13th-century Mappa, the inclusion of intentional errors in maps is a practice that continues even today.  The maps of the Ordnance Survey are dotted with tiny fabrications and inaccuracies, though piety doesn’t enter into the equation: the mistakes are there to catch out plagiarists.  Supposedly, every page of the London A to Z includes a bogus street.  Frank Jacobs calls these legendary “trap” streets “the Loch Ness of the A to Z” – he has never been able to locate one – and it’s an intoxicating thought, that even this most sensible and everyday of maps could have deliberate mistakes woven into it like a Persian rug.

Bottling the Smoke

If maps are the ultimate expression of the human drive to rational order, it follows that they are most desperately needed in that embodiment of the human tendency to chaos: the city.  More than any other environment, the sprawling metropolis must be refracted through the cartographer’s prism before we can begin to comprehend it.  As Peter Ackroyd puts it in London: The Biography (2000), “the mapping of London represents an attempt to understand the chaos and thereby to mitigate it; it is an attempt to know the unknowable.” The iconic Tube map, originally designed by Harry Beck in 1933, represents one of the most successful efforts to tame the wildness, smoothing out the miles of tunnels to produce a stylised representation that is topologically correct (the lines intersect at the correct points) but topographically wildly inaccurate.  Taken a step further, the rational urge to disentangle the urban maze produced Christopher Wren’s 1666 plan for London after the Great Fire.  Wren’s vision of straight, wide streets and neat piazzas was as unthinkable to Londoners then as it is now: this beautiful but over-optimistic map was as far as he got.

Simon Foxell in Mapping London: Making Sense of the City (2007) compares the mapmaker to a sculptor who “must chip away at the raw block of material that is the city to reveal the shape and representation hidden inside.”  The resulting artefact will inevitably be shaped by its maker’s purpose and priorities.  Take for instance John Snow’s 1855 map of cholera cases in Bow, which led to the discovery that cholera was water-borne.  Or the National Temperance League’s ‘Modern Plague of London’ map (c.1886) which marked the city’s pubs as pox-like red dots.  Or the ‘Circuiteer’ (c.1847), overlaid with one-mile diameter circles to enable the user to calculate cab fares and avoid being swindled.

The recent success of Secret London – originally a Facebook group which spawned hundreds of imitators, and now a website – attests to the continuing desire of Londoners to “know the unknowable”.  The site (still, at the time of writing, a work in progress) allows users to share their recommendations for the city’s lesser-known haunts, and to plot these on a collaborative and idiosyncratic map using Google Maps and its Local Search API.  “It’s about reawakening your experience of the city,” the site’s founder Tiffany Philippou told me. “It’s about people talking amongst themselves and sharing their different experiences, from their favourite park bench to the best places to look for graffiti – everything is in one place, and easy to find.”

The “Secret” part of the name may have contributed to the group’s initial appeal, but it is slightly misleading: there is nothing particularly clandestine or underground about the website.  Rather, it taps into a desire, as old as the first London maps, for a comprehensive, user-friendly document of the city.  The difference here is that the availability of Google’s mapping technology renders such an ambition far more achievable.  The sheer scope and detail of Google Maps and Google Earth is astonishing, but Google may even be outdone by Microsoft’s Bing, whose augmented-reality maps were recently unveiled by developer Blaise Aguera y Arcas.  Breathtaking as these are, they raise problematic questions of privacy. It has never been easier to orientate yourself in the world, but by the same token it has never been harder to hide.

A to B, not A to Z

Cartographer Denis Wood has estimated that 99.9% of all maps ever made were made in the 21st century. Surrounded as we are by cartography on every website and smartphone, it’s easy to forget how unnatural such a God’s-eye view is, and what a fiendish business mapmaking must have been before aviation or satellites.  Browsing through the antique maps in the basement of Stanfords travel bookshop, you will come across maps charting the road from, say, London to Land’s End (this 1675 example is by the acknowledged master John Ogilby).

Laid out in scroll-like sections, ignoring all but the road and key waypoints or landmarks, these linear itineraries are reminiscent of an ancient oral tradition in which all that mattered was that you walked for ten miles until you came to a church with a spire, then took the right fork and proceeded for fifteen miles, and so on.  In a sense, the GPS sat-nav systems that now come as standard in cars are navigating us back towards this tradition: 21st-century drivers can progress from origin to destination without picking up much sense of the surrounding area and its topography.

The “songlines” of traditional Aboriginal Australian culture worked on a similar basis, allowing nomadic peoples to navigate across vast distances by reciting equally vast song sequences containing topographical pointers. “A song,” as Bruce Chatwin puts it in his 1987 bestseller The Songlines, “was both map and direction-finder. Providing you knew the song, you could always find your way across country.” It’s hard not to be seduced by the idea of music doubling as a map – a notion that evidently appeals to Tom Waits, who likes to cover the walls of the recording studio with maps, and whose song ‘Don’t Go Into That Barn’ (from 2004’s Real Gone) accurately describes the progress of a slave boat down the Mississippi:

Dover down to Covington
Covington to Louisville
Louisville to Henderson
Henderson to Smithland
Smithland to Memphis
Memphis down to Vicksburg
Vicksburg to Natchez…

If a musical score – significantly called a “chart” in musicians’ parlance – provides a visual, spatial representation of sound (an idea explored literally in James Plakovic’s World Beat Music), these musical maps do the reverse, rendering spatial relationships as time-bound, linear narrative.

X marks the spot

Primarily, though, the geography embedded in the Waits song, or in the songs of the Pogues, adds colour and connects them to a folk tradition of storytelling.  Folk music, born of local communities and propagated by travel, is naturally fixated on place and landscape; it’s no coincidence that the folk music of the American South is called “country”.

Of course, music isn’t the only art to employ geography in the service of imaginative richness or naturalistic heft: the history of English prose fiction has long been intertwined with our passion for maps.  Defoe’s sequels to Robinson Crusoe (1719-20) and Swift’s Gulliver’s Travels (1726) were not only influenced by nonfiction travel narratives; they were also accompanied, from their earliest editions, by illustrations mapping Crusoe’s and Gulliver’s journeys to far-flung and fantastical landscapes.  Plenty of novelists have followed this early lead by incorporating maps to the point where they are an integral, inseparable part of the work.

The standout examples of this “peculiarly British habit” (as Mike Parker calls it in Map Addict) must be the maps of Middle-Earth produced by J.R.R. Tolkien and son, and Robert Louis Stevenson’s map for Treasure Island (1883). In Stevenson’s case, intriguingly, the map actually came before the novel – born of an escapist impulse on a wet Scottish holiday.  As his stepson later recalled:

busy with a box of paints I happened to be tinting a map of an island I had drawn. Stevenson came in as I was finishing it, and with his affectionate interest in everything I was doing, leaned over my shoulder, and was soon elaborating the map and naming it. I shall never forget the thrill of Skeleton Island, Spyglass Hill, nor the heart-stirring climax of the three red crosses! And the greater climax still when he wrote down the words “Treasure Island” at the top right-hand corner! And he seemed to know so much about it too – the pirates, the buried treasure, the man who had been marooned on the island… “Oh, for a story about it”, I exclaimed, in a heaven of enchantment…

The map begat the story, which begat scores of imitators borrowing the “X marks the spot” trope, which between them begat countless boys with map fixations who drove their parents mad by digging up the garden.  Myself included.

If, by the advent of modernism, the endpaper map had largely been consigned to fantasy and children’s literature, the novel had not lost its geographical fixation. James Joyce’s Ulysses (1922) and Virginia Woolf’s Mrs Dalloway (1925) are modernist examples of what would now be called “psychogeography”: the contours of consciousness overlaid on those of the city. (Vladimir Nabokov insisted that when teaching Ulysses, “instructors should prepare maps of Dublin with Bloom’s and Stephen’s intertwining itineraries clearly traced.”)  And there is a striking correspondence between Jack Kerouac’s On The Road, whose original 1951 draft was notoriously typed on a single continuous scroll of paper, and John Ogilby’s “A to B” road map shown above.

If these novels are in some sense aspiring to the condition of maps, they are the inverse of the maps produced by Denis Wood.  Interviewed by Ira Glass for This American Life, Wood described his ongoing efforts to achieve “a poetics of cartography” through the mapping of his neighbourhood of Boylan Heights in Raleigh, North Carolina.  Each of Wood’s maps is concerned with one specific variable, from the network of overhead power lines, to the pools of light cast by the street lamps, to the Halloween pumpkins on the neighbours’ porches.  The aim of this quixotic, consciously futile project is to document every sensory detail of the place where he lives, constructing an atlas that amounts to an attempt to – as Glass put it – “write a novel with pure symbols.”

A map of Nowhere

In truth, the relationship between maps and prose writing goes back before the origins of the novel as we know it.  Thomas More’s Utopia, printed in 1516, carried this illustration of its ideal island (colours added later):

The island’s skull shape is a reminder of the link made by maps between physical landscapes and the landscapes of consciousness and the imagination.  The castle occupies pride of place in the middle of the brain - just about where Jerusalem sits on medieval world maps.

"Utopia" comes from the Greek for "nowhere", so this map poses something of a paradox: surely the defining feature of most maps is that they depict somewhere?  But the desire to map Utopia (in whatever form it takes for the mapmaker) constitutes a powerful strand of cartography in itself, from fantasy fiction to futuristic architects’ plans.

As Oscar Wilde put it, “A map of the world that does not include Utopia is not even worth glancing at”. There is something wonderfully hopeful in the act of representing a wished-for world on a map: as if the mapmaker’s instruments could simply conjure it into being.  Such plans of Utopia represent a buttress against the chaos of the world, a strategy for taking control that combines both rationality and imagination.  In this they resemble maps of all shapes and sizes, and from all shades and scales of “reality”.

Like maps themselves, any representation of the world of cartography must be highly selective.  In plotting my personal journey through this huge subject I have revealed my own bias: towards London and the arts in general; towards Tom Waits, Vladimir Nabokov and The West Wing in particular.  Many thanks to Frank Jacobs for his help, and to the other industrious blogging types who have made the Web a map-hunter’s dream.

"I Love You, Phillip Morris", with context

Coincidentally, just as Roberta was poppin’ out her post on A Single Man, a friend took me to a preview screening of I Love You, Phillip Morris, in which Ewan McGregor and Jim Carrey go rather spectacularly gay. Having seen both movies and read Roberta’s lucid take on Tom Ford’s Fabergé egg of a movie, I would definitely say that Phillip Morris had the more distancing effect on me, artificial as A Single Man might be. What’s more, it made me think in context, not just in the (highly emotional) moment, and for that I’m grateful.

Phillip Morris is definitely not the better film. In fact, it’s markedly weaker; a bit confused, sometimes tone-deaf, witless at some crucial moments, and so thrilled by having bagged Carrey and McGregor that it doesn’t concentrate on giving their talents appropriate material or support. The elements of it that work, however, do so very well. In particular, I’m impressed that any plot twist involving AIDS could be funny, and not in the Curb Your Enthusiasm/coal-black schadenfreude idiom either. But more important than its internal failures and successes is the peculiar moment it illustrates - one where gays (whatever they really are) have truly gained a presence in mainstream, straight cinema.

After so long being either tragicomically marginalised (or thrown in with autism sufferers and the domestically abused as targets in the pinball-machine structure of an esteemed Hollywood career), gay men now sit as subjects in genres across the board. Recently we’ve had biopic (Milk), buddy/romance hybrid (Brokeback Mountain) and highbrow literary adaptation (A Single Man), all made with top-flight straight actors and marketed (successfully) well beyond their traditional, reliable-but-small audience of history-conscious gays like me. Oscars were mentioned, given, or pointedly withheld to mild outcry (remember Crash?) - this, readers, is progress.

Now, Phillip Morris isn’t up to those recent movies’ standards (even the turgid Brokeback is more involving), but it’s worth a go. More importantly, it shows off the complexity of this situation which I so lazily choose to deem a good one. For even given its political importance as a popcorn movie with primary-coloured advertising that also features Jim Carrey having graphic sex with a man, Phillip Morris runs close to some dangerous ground. These gays, should we fail to see them for what they are, are the campest ones ever to take up so much Odeon screen space, and it’s a shame such extremes are needed. And given that the film throws its lot in with the gay-from-birth-or-before school of thought (again, in some ways an advance from it’s-his-dad’s-fault), it runs the risk of grafting campness onto homosexuality just as toothy smiles and watermelons were once tied to black Americans. Then there’s the thorny issue of AIDS, whose presence is never explained. It somehow arrives. Maybe that’s just what happens in this new gay world that Hollywood has “discovered”.

But the point is not to lose sight of the bigger picture. We’re here, we’re queer, and they already appear to be getting used to it, but the question remains: on whose terms does that acceptance come? It’s a question I certainly don’t feel confident answering. Any ideas, people?

- Andrew Naughtie

Trailer for “I Love You, Phillip Morris”

The shadow of our sorrow: mimesis not catharsis in A Single Man

I know it’s been a short while since A Single Man came out, and Colin Firth didn’t win the Oscar and all – though he did win a Bafta, give a brilliant speech, and look super-fly while he was at it.  But I’ve been thinking about this film ever since I saw it, and I’m only just working out why Tom Ford’s directorial début, though fantastic in many ways, didn’t quite do it for me.

By ‘do it for me’, I don’t only mean ‘make me cry,’ although in my lexicon the two are often synonymous.  I really mean that the ingredients of this film – an absorbing story, excellent performances from respected actors, gorgeous scenery and perfectly put-together shots – promised more of an effect upon the audience than they delivered.

I have read with interest the various reviews of A Single Man, and to generalise only a little, they have chiefly said that the film looks too much like an extended version of the advertisements which were Tom Ford’s cinematic training-ground.  This is somewhat true: there is no question that Ford’s stylised visual effects ratchet up the allure of what’s on screen very much in the style of an advert.

But I don’t, as some do, think that this accentuated aestheticism spoils the film because it is distracting; still less do I take the view of the Guardian’s David Cox, whose blog on what he called this ‘profoundly gay’ film was fairly plausible, until he tried to found his argument on the thesis that homosexual people ‘have a heightened appreciation of the look of things.’

It is my opinion that the extraordinary artistic aptness of A Single Man – everything and everyone on screen is lined up, lit up, polished up, from start to finish – means that the tragic plotline has its sadness neutralised.  The repression of the central character, George Falconer, who focuses obsessively on maintaining the appearance of normality after his partner’s untimely death, is reflected a little too well in the film’s insistence on rendering emotional chaos sensorily appealing.  There can be no catharsis at the end of A Single Man, because mini-catharses are happening all the time.

In this, the workings of the film remind me of a scene from Shakespeare’s Richard II. Deposed, slung in jail, and being hectored by his enemy, Richard has his misery soothed by a metaphor.  When the fallen king breaks a mirror and Bolingbroke tells him, ‘The shadow of thy sorrow hath destroyed / The shadow of thy face’, Richard answers, ‘Say that again. /  “The shadow of my sorrow?”  Ha, let’s see.’  He is so busy being delighted by the appositeness of what Bolingbroke has said that he forgets to be sad.

Just so, Tom Ford is the Bolingbroke to his audience’s Richard II; but we, unlike that unlucky monarch, and unlike the bereaved George, do not need our grief neutralising.  Ours is not real grief: if it’s diluted by much more than the fact of its fictiveness, we cease to feel it altogether.

There is a measure of excellence in Tom Ford’s decision to replicate at the level of the film itself the behaviour of its central character – but such bravura mimesis comes at the cost of the audience’s true emotional engagement.  We can only feel the shadow of our sorrow, and A Single Man, though it is an impressive and occasionally affecting film, is ultimately the poorer for it.

posted by Roberta



A gnome is:

a) a maxim which imparts knowledge, often taught to the young
b) a legendary dwarf

gnome online is:

Ossie Froggatt-Smith works 9-5 as an editor and sometimes a journalist. He studied Byzantium and still thinks about it all the time. He manages gnome.

Edward Randell is a journalist. He sings in Paris with the Voice Messengers, and has written for the TLS and Jazzwise. He edits gnome.

Roberta Klimt spends a lot of time at the British Library, so much that she gets paid for it. She blogs and writes at gnome and the Oxford Left Review. She also studies medieval Italian literature.

Andrew Naughtie studies sociology. He lives in Bristol, but is moving to Chicago, Illinois! He blogs and writes at gnome.

At gnome, we get together with people and promote non-fiction writing because we think that:

a) young non-fiction writing is v v under-appreciated
b) writers are discouraged by brutal subs and celebrities

If you would like to:

a) suggest some useful links/blogs/events to gnome
b) write a non-fiction essay to be produced by gnome

we would love to hear from you.

Submit by clicking submit, or get in touch by emailing gnome [dot] magazine [at] googlemail [dot] com.