Saturday, January 12, 2008

The Pearly Gates of Cyberspace

The rise of online social networks continues unabated. Myspace has upward of a quarter of a billion members. Facebook boasts another 60 million members storing 1.7 billion photographs. Meanwhile thousands of smaller sites have sprung up to serve niches where users connect and talk about the most abstruse topics imaginable. All these sites promise a new cyberspace arena of equality and friendship.

In her book “The Pearly Gates of Cyberspace: A History of Space from Dante to the Internet”, Australian philosopher Margaret Wertheim looks at the parallels between cyberspace and religion. As the subtitle suggests, the book looks at the way space has been treated through the centuries. She tracks how the “space” humans found for the religious soul was eventually squeezed out by science and the Enlightenment. Not until the advent of the internet and cyberspace does the soul have room to move again. The digital domain, argues Wertheim, is an attempt to realise a non-physical space for the ancient Christian concept of Heaven.

Wertheim cites the similarities Umberto Eco saw between modern America and the last years of the Roman Empire. In both places strong government and social polity collapsed, leaving society open to rupture and fragmentation. After Rome fell, feudal lords filled the gap; in modern times it is corporations. In Roman times the Christian religion emerged to fill a spiritual void. Today, Wertheim argues, cyberspace with its repackaged idea of Heaven fulfils the same need in a secular technological format.

The term “cyberspace” was invented by William Gibson in his 1984 sci-fi novel Neuromancer. For Gibson cyberspace was a “bodiless exaltation”. Others hailed the promise of cyberspace’s transcendent possibilities. Psychologists suggest the disembodiment of the Internet appeals especially to young boys who are going through awkward physical transformations. The metaphor “surfing the net” conveys a real-world image of prowess and grace. Meanwhile the old Sanskrit word “avatar” (meaning the incarnation of a higher being) offers a whole new anonymous bodily experience in cyberspace in such domains as Second Life, in what Wertheim calls an “outpouring of techno-religious dreaming”.

Wertheim traces the loss of a physical soul-space to the end of the Middle Ages. In his Divine Comedy, Florentine poet Dante Alighieri documented a trip through Hell, Purgatory and Heaven accompanied by Roman poet Virgil. This was a voyage through the Christian soul, and Wertheim describes it as a map of soul-space. Hell was in the centre of the earth (“abandon hope, all you who enter here”), Purgatory was an island in the southern hemisphere where people waited to be judged, and Heaven was in the stars where the just ended up. Dante and Virgil wandered through all three kingdoms in a spiritual journey. All three are real places for Dante, even if he can describe them only through his imagination.

Wertheim traces the victory of physical space to a contemporary of Dante, Giotto di Bondone. Giotto's mastery of perspective painting gave the physical world a new splendour. His paintings allowed viewers to look into the physical space behind the picture plane; he created a virtual world for the subjects of his paintings. They reflected a change in the Western zeitgeist towards representing the world in a way that had no room for the ineffable Christian soul-space. In science, Roger Bacon promoted the study of the physical realm, showing how it would lead to improvements in agriculture and medicine and could prolong human life.

The twofold reality of the Middle Ages was now under threat. Nicholas of Cusa laid out a new cosmology in which space was unbounded and the Earth was not at the centre of the cosmos. Fifty years later the obscure Polish canon Nicolaus Copernicus used his ample spare time to disprove the ancient Ptolemaic celestial clockwork model. Johannes Kepler took Copernicus’s work further and demolished the distinction between celestial and terrestrial space: everything in the sky, Kepler postulated, obeyed the same natural physical laws as on Earth. Galileo proved Kepler right with his observational science. Descartes then gave infinite space a philosophical imprimatur that left no room for “God” to exist. It was up to Isaac Newton to synthesise the work of Copernicus, Kepler, Galileo and Descartes into a universal law of gravitation that worked for celestial bodies as much as earthly ones.

In the 20th century, space became relativistic. Edwin Hubble made two great findings: he found there were multiple galaxies, and then he made the devastating discovery that they were all moving away from each other. The universe was expanding. Hubble was uncomfortable with his own findings and the almost religious conclusion that the universe had a beginning and an end. Einstein’s special theory of relativity held that the velocity of light was the same relative to everything, which meant space and time were not absolute. His general theory of relativity then explained gravity as a by-product of the shape of space. Einstein’s work gave way to hyperspace and its postulation of ten or eleven dimensions of space, string theory, and the possibility of a theory of everything. In the new cosmology mathematics was all; anything that couldn’t be expressed in numbers did not exist.
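The claim that constant light speed makes time relative can be made concrete with the standard time-dilation formula of special relativity (a worked illustration of mine, not an equation from Wertheim's book):

$$ \Delta t' = \frac{\Delta t}{\sqrt{1 - v^2/c^2}} $$

Because the speed of light $c$ is the same for every observer, an observer watching a clock move past at $v = 0.8c$ measures each of its seconds as lasting $1/\sqrt{1 - 0.64} \approx 1.67$ seconds: the moving clock runs slow, so neither time nor space can be absolute.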

Computers thrive on numbers (especially 0s and 1s), and the Internet has grown at an astonishing rate. Written in the late 1990s, Wertheim’s “Cyberspace” talked about an expansion from a thousand host computers to 37 million in 15 years. Blogs were unheard of in 1997; now there are 70 million or more. Cyberspace is a new place to socialise and play for over a billion people. Most importantly to Wertheim, it is not subject to the laws of physics. It therefore returns us to a dualistic theory of reality. People create new identities, which has the potential to split the self into a radical multiplicity. It is a new form of “self space” which exists independently of physical space.
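Those host-count figures imply a remarkable growth rate. As a back-of-the-envelope check (my arithmetic, not Wertheim's), a rise from a thousand to 37 million hosts over 15 years corresponds to an annual growth factor of

$$ r = \left(\frac{37{,}000{,}000}{1{,}000}\right)^{1/15} \approx 2.0 $$

that is, the number of host computers roughly doubled every single year for fifteen years.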

In the final chapter, Wertheim returns to the religious theme of the title. She quotes virtual reality guru Kevin Kelly, who said the internet was a syncretic version of Christian ritual. In cyberspace, bodies do not age and perfection is possible. Sci-fi writers maintain cyber-immortality is possible. Freed from the prison of our bodies, our cyber-souls will soar into “the infinite space of the digital ether”. Yet it is the all-too-human communal aspect that gives the social network sites their attraction. The Internet becomes an “electronic agora” where people rebuild the communities lost to mall culture. It is a communal inner space of humanity’s own making.

Thursday, January 10, 2008

Magic Bullets v Encoding /Decoding: A comparison of media research theories

This post will define and place in historical context two media research theories: the “magic bullet” (“hypodermic”) model and the encoding/decoding model. It will then compare the theories in terms of their treatment of the audience and their relationship to power structures.

The earliest media research occurred in the 1920s and reflected the insecurities and paranoias of the time. Echoing the pessimistic “mass society thesis” later taken up by the Frankfurt School, early researchers believed communication worked by injecting powerful messages into the minds of passive audiences. Theorists saw how both sides in the First World War used propaganda to such great effect that it acquired “a reputation of omnipotence”. The evolution of propaganda in radio and cinema encouraged Lasswell and others to view mass media as having a powerful influence on public opinion. According to this notion, the media fired off “magic bullets” which merely had to hit their target to produce the desired effect. The theory was also called the “hypodermic model” because it imagined the media as syringes able to “inject” narcotic propaganda directly into the veins of the audience. By the 1940s and 1950s, researchers were beginning to rebuff this “pessimistic” thesis, saying it proposed too powerful and unmediated an impact by the media. They began to look at the active role of the audience in the making of meaning.

By the 1970s, alternative ideas were taking shape which placed communication in a wider socio-cultural context. Stuart Hall’s ‘encoding/decoding’ model emerged out of a British semiotic and cultural studies approach to media studies. Semiotics is the textual analysis of how meaning is created in language, non-verbal codes and cultural “signs”. Audiences were no longer “empty vessels” to pour meaning into; they were considered culturally formed and situated. Hall called the phases of his model “moments”, which he defined in Marxist terms as “production, circulation, distribution/consumption, reproduction”. Each “moment” played a role in the manufacture of meaning. Audiences could accept, negotiate, or reject the media texts they were presented with. Researchers now had a model to show what audiences did with media as opposed to what media did with audiences.

A key difference, therefore, between the hypodermic and encoding/decoding models is how they view the role of the audience. Effects theorists believed that audiences were passive and culture could be imposed from above by an elite group of skilled manipulators. While the hypodermic theory has a psychological plausibility, it fails to take into account that communication is not just about sending messages; it is about the sharing of meaning. Hall’s model offers better insights into the way culture operates by demonstrating how various actors change communication at each step in the process. The model emphasises the discourse rather than the participants and examines how each of the “passage of forms” in the process provides meaning to the message. Encoding/decoding encouraged research to move away from media effects towards audience talk and increased focus on communities, particularly marginalised ones. Yet although the hypodermic theory is now academically discredited, it continues to have resonance in the wider community, especially in the context of violent media effects on children. It also underpins the one-dimensional nature of the media’s own ratings measurement research.

The hypodermic and encoding/decoding models are also subtly different in their treatment of power structures. The simplistic linear Shannon-Weaver model of communication which fits the earlier theory is superseded by Hall’s more complex diagram of frameworks producing meaning which are mediated by the discursive elements of the broadcast programme. Whereas the earlier theory proposed a simplistic but all powerful media, the encoding/decoding model suggested a sophisticated hegemony existed which operated through popular culture. Hall demonstrated how the “professional codes” of the media integrate with the cultural and social order to become a bulwark of the power structure. Hall showed that power in communication is more likely to be transmitted by signification than by syringe.

Wednesday, January 09, 2008

Rating the ratings: a study of TV audience measurement

Ratings are the central means of mass audience measurement for the television industry. This post will define what ratings mean and then examine positive and negative aspects of ratings use. It will examine critical US research in the 1990s and conclude with a discussion of the ratings transition debate in Australia earlier this decade.

Ratings provide a quantitative measure of how many homes or people are viewing a program, advertisements, a station or the media itself. They are based on an audience snapshot using both geographical (multi-stage cluster) and characteristic (stratified) sampling techniques. Because of their feedback element, ratings largely control what is broadcast. The television industry uses audience ratings information to justify its broadcasting service performance as well as the cost of advertising spots and sponsorship deals. The ratings approach is based on “exposure”, which measures a single audience behaviour: “open eyes facing a medium”. When counted and analysed, exposures allow the industry to predict audiences and pre-sell slots to advertisers. Ratings for a programme are compared against others at the same point in time to determine audience share. Ratings, therefore, create a manageable image of the public for television executives. It is the apparently neutral form of numbers that invests the ratings with so much power.
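The distinction between a rating (the share of all TV households) and a share (the share of households actually watching at that moment) is easy to blur. A minimal sketch with invented figures (the numbers and function names are illustrative only, not drawn from any actual ratings service):

```python
def rating(viewing_homes: int, total_tv_homes: int) -> float:
    """Rating: percentage of ALL TV households tuned to the programme."""
    return 100 * viewing_homes / total_tv_homes

def share(viewing_homes: int, homes_using_tv: int) -> float:
    """Share: percentage of homes watching ANY TV in that time slot."""
    return 100 * viewing_homes / homes_using_tv

# Hypothetical market: 5m TV households, 2m of them watching TV
# in the slot, 600,000 tuned to the programme in question.
print(rating(600_000, 5_000_000))  # 12.0 -> a "12 rating"
print(share(600_000, 2_000_000))   # 30.0 -> a "30 share"
```

The same exposure count feeds both figures; share is always the larger number because its denominator excludes switched-off homes.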

But there are issues with this blunt approach. Because ratings drive advertising revenue, broadcasters tend to treat audiences as commodities on the basis of their viewing consumption. The economic system of commercial television depends on the extraction of surplus value from an exploited audience. The pressure is therefore on broadcasting decision makers to pander to mass markets in order to continually win large portions of the audience share. Ratings are kept high by sticking to proven formulas. Risk taking is therefore rare, because radically different programming may shift audiences in the opposite direction than intended. This means that specialist interests such as the poor, the aged, the intelligentsia and children are not catered for by commercial broadcasters, who know they will get more advertising revenue from mass audiences. As a result, innovative programming remains the preserve of publicly funded broadcasters who are not as bound to ratings.

Audience ratings have been criticised from a qualitative perspective. Because exposure is the only data recorded for ratings, the industry only cares about the numbers involved in tuning in, staying tuned, changing channels and turning off. No other audience behaviour is relevant. This means ratings do not capture whether a programme is interesting to its audience. It also means that low ratings problems are generally solved by programming decisions rather than by audience research. Ratings, in effect, “take the side” of the broadcasters. Audience ratings measure only if the message is received and do not capture whether it has been registered or internalised. Broadcasters are not interested in the “lived reality behind the ratings”. The only problem that matters for broadcasters is how to get the audience to tune in.

In the 1990s, researchers such as Eileen Meehan and Ien Ang began to criticise the way audiences were manipulated by the ratings. Meehan noted that intellectuals simply didn’t count in decision-making due to their tiny numbers. TV programming reflected the “forced choice behaviours” of the masses. Ang argued the media didn’t want to know about their audience, merely to prove there was one. Ratings produce a ‘map’ of the audience which provides broadcasters and advertisers with neatly arranged and convenient information. This allows the industry to take decisions about the future with what Ang calls a sense of “provisional certainty”. What inevitably follows is the streamlining of television output into formulaic genres, a plethora of spin-offs and the rigid placing of programmes into fixed time slots. In a competitive environment each competitor will make its product more like the others rather than taking a chance on producing something different. The result is a remorseless repetitiveness at the heart of the American TV schedule.

The Australian broadcasting industry has also endured controversy as a result of ratings issues. In the early 2000s the apparent certainty of measurement provided by ratings was undermined by a change in the ratings regime. In 2001, OzTAM (and its Italian supplier ATR) won a lucrative contract to replace incumbent provider ACNielsen in providing Australian metropolitan TV ratings. Despite both parties using the same “people meter” technology in the six month overlap period, major discrepancies emerged between the two providers’ data. The discrepancies led to widespread unease in the $5 billion Australian TV industry. ATR boss Muir said the discrepancy arose because ratings were sample-based estimates and therefore subject to sampling and statistical error. But advertisers did not want to hear about sampling errors or issues of “psychological makeup” that a rating system cannot capture. What they wanted was certainty for their business decisions, and they demanded an unrealistic 100 per cent ratings accuracy. Eventually the two measurement systems came closer together, easing the fears of the advertisers. Nonetheless the controversy exposed the gulf between the questionable accuracy of ratings and the absolute faith put in the system by advertisers.
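Muir's defence is easy to quantify. For a hypothetical people-meter panel of 3,000 homes reporting a rating of 10 per cent (illustrative figures of mine, not OzTAM's or ACNielsen's actual panel sizes), the standard error of the estimated proportion is

$$ \sqrt{\frac{p(1-p)}{n}} = \sqrt{\frac{0.1 \times 0.9}{3{,}000}} \approx 0.0055 $$

giving a 95 per cent confidence interval of roughly $\pm 1.1$ rating points. Two honestly run panels can therefore disagree by a point or more on the same programme, which is exactly the kind of discrepancy that unsettled the advertisers.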

Tuesday, January 08, 2008

Labor says no to Stolen Generations compensation

The Australian government has ruled out compensation for members of the Aboriginal Stolen Generations despite pleas from Indigenous leaders and the recommendations of an enquiry into the issue. Indigenous affairs minister Jenny Macklin said Labor would not be creating a compensation fund. She said the way forward was to tackle the prevailing problems in the community, such as the massive gap in life expectancy between Aborigines and the rest of Australia. "The point of the national apology really is to provide a bridge of respect between indigenous and non-indigenous Australians," she said.

The government statement runs counter to the view stated by Labor senators on the Stolen Generations enquiry, who recommended a tribunal be established to look at reparation and monetary compensation. Democrat Senator Andrew Bartlett asked what had changed Labor's mind in government and called Macklin’s statement the most “unequivocal rejection” Labor has made on the issue in recent years. Bartlett said it was counter-productive to issue an apology without considering the other unimplemented recommendations of the enquiry.

Stolen Generations Victoria agrees a fund is needed. Chair Lyn Austin called on victims to consider suing the government if it failed to establish a compensation fund. She compared the Stolen Generations to victims of crime, who get paid compensation. "You are looking at the gross violation and the act of genocide and all the inhumane things that have happened to our people,” she said. "We are actually thinking that ourselves, myself and another five siblings that were adopted into a family, are considering a class action.”

The controversy occurs as Prime Minister Kevin Rudd prepares to honour an election commitment by reading out an apology when federal parliament resumes next month. In 1997, the 700 page 'Bringing Them Home' report found the removal of Aboriginal children was a gross violation of human rights and that forcible removal amounted to an act of genocide. The then Howard government refused to consider compensation, instead deciding to provide welfare measures to lift health and education standards for Australia's Indigenous communities.

Howard also refused to apologise for fear of opening the door to compensation claims. However as formal apologies were read out in various state parliaments, the pressure mounted on Howard to respond. In 1999 he drafted a motion expressing "deep and sincere regret" over the issue and called the stolen generation "the most blemished chapter" in Australia's history. However his government also called into question whether in fact the “stolen generation” actually existed saying there weren’t enough children taken to warrant the description.

This claim was hotly denied by historians and the Indigenous community. Aboriginal leaders had demanded a billion dollars in reparations for at least 100,000 children who were forcibly placed in orphanages or foster care between 1870 and 1967, with the intention of assimilating them into white Australia. The Bringing them Home report stated that almost every Aboriginal family in the country was affected by the policy.

In its address to readers, the report didn’t just expose the abuses of the past. It also asked that “the whole community listens with an open heart and mind to the stories of what has happened in the past and, having listened and understood, commit itself to reconciliation.” In this, the report was calling for an active ethical engagement on the part of its readers to become involved in the justice process by acknowledging the loss and harm that had been done to the Indigenous community. This latest decision from the newly installed Labor government shows Australia remains a long way short of that commitment.

Monday, January 07, 2008

Saudi blogger Fouad al-Farhan imprisoned without charge

The father-in-law of detained Saudi Arabian blogger Fouad al-Farhan has visited him in a Jeddah prison where he has been held for 27 days without charge. It was the first time Saudi authorities had allowed access to al-Farhan who runs a blog that discusses local political and social issues. The National Society for Human Rights (NSHR) has cited Saudi law saying he should now be given access to a lawyer. While authorities have not outlined the reason for his detention, the Saudi interior ministry said al-Farhan was being held for "interrogation for violating non-security regulations".

The Committee to Protect Journalists (CPJ) wrote a letter on 2 January to King Abdullah protesting against al-Farhan’s month-long detention without charges. The CPJ found it “deplorable that Saudi authorities would continue to hold our colleague in near secrecy after nearly a month” saying it violated the most basic norms for free expression. They also said the detention ran counter to official Saudi statements in support of reform and a more open press. They urged the king to “use all his influence” to ensure al-Farhan’s release.

Fouad al-Farhan took the dangerous step of refusing to hide behind a pseudonym. The 32-year-old al-Farhan blogs in Arabic at “AlFarhan” and has also posted in the English-language group blog Good Morning Jeddah (though not since 2006). He describes his mission as "searching for freedom, dignity, justice, equality, public participation and other lost Islamic values.” The site has now been transformed into a campaign for his release. AlFarhan is one of the most widely read blogs in Saudi Arabia.

Fouad al-Farhan is one of the pioneers of Saudi blogging. He was born in 1975 in Taif, in western Saudi Arabia, and received his higher education in the US. Al-Farhan has been a reader of blogs since the beginning of the blogging revolution. With his own blog he wanted to express his “freedom, ideas, and hopes, publicly”. But he did it too publicly for the authorities’ liking. On 10 December al-Farhan was detained by Saudi security agents at the Jeddah office of the IT company he owns. Security agents later visited al-Farhan’s home and confiscated his laptop. At the time it was not known where he was being held, and police provided no reason for his arrest.

Al-Farhan was expecting to be arrested. He sent an email to friends in the week prior to his arrest in which he said he had received a phone call from the Saudi interior ministry telling him to prepare himself “to be picked up in the coming two weeks” for an investigation by a high-ranking official. Al-Farhan wrote in the email why he thought he was about to be arrested: “The issue that caused all of this is because I wrote about the political prisoners here in Saudi Arabia and they think I’m running an online campaign promoting their issue.” The e-mail is now posted on his blog.

Al-Farhan is the first Saudi blogger to be arrested. There are an estimated 600 bloggers in Saudi Arabia writing in English and Arabic. They are male and female, conservative and liberal, and mostly blog anonymously. The biggest problem writers in Saudi Arabia face is the country’s conservative religious establishment, which acts as a powerful lobbying force against progressive coverage of social, cultural, and religious matters. Actors in this field include official clerics, religious scholars, the religious police, radical revivalist preachers, and their followers. Government officials appease this religious constituency by dismissing editors, blacklisting dissident writers, ordering news blackouts, and admonishing independent columnists to deter undesirable criticism, especially over religious issues. This is the price Saudis pay for the long-term alliance between the ruling Al-Saud family and followers of the 18th-century cleric Muhammad Ibn Abdel Wahab, whose strict teachings form the basis of the country’s official Wahhabi doctrine. The Al-Sauds wield political power while the Wahhabi clergy provide spiritual authority in return for bestowing legitimacy on Al-Saud rule.

The result is that Saudis are free only to speak about religion and politics in non-Saudi publications or other venues. Because public gatherings and political parties are banned, Saudis need to be creative in order to speak candidly about the administration or the religious authorities. Therefore they discuss issues in venues such as private homes, in salons or discussion groups known as “diwaniyas”, in coffee shops, on satellite television, or increasingly, on blogs.

With its population of 23 million, Saudi Arabia has one of the highest internet penetration rates in the Arab world. Blogging has provided a platform for women to criticise their male-dominated society. Meanwhile the now defunct Religious Policeman, written by an anonymous Saudi living in Britain, exposed the hypocrisy of the kingdom's Commission for the Promotion of Virtue and Prevention of Vice (the religious police), who enforce the country's strict Islamic moral code. Unsurprisingly his blog was banned in his homeland. Saudi Arabia appears on Reporters Without Borders’ “internet enemies” list; its censorship is rife, targeting pornographic content, Israeli publications and homosexuality as well as opposition websites.

Saudi political censorship is in direct contravention of Article 19 of the UN's Universal Declaration of Human Rights, which states that all people have the right to freedom of opinion and expression, including the right "to seek, receive, and impart information and ideas through any media and regardless of frontiers." Unsurprisingly the autocratic Saudi regime prefers the out clause that Islamic nations put into the 1990 “Cairo Declaration on Human Rights in Islam”, which states that “everyone shall have the right to express his opinion freely in such manner as would not be contrary to the principles of the Shari’ah”. As Fouad al-Farhan has found out, there but for the grace of God goes free speech in Saudi Arabia.

Saturday, January 05, 2008

Kenyan violence affects aid programs

Up to 100,000 people are facing starvation in western Kenya due to election-related violence. While the political leaders remain in stalemate in Nairobi, the World Food Program (WFP) has warned that 100,000 people in the Northern Rift Valley are in “critical need of food”. This part of Kenya has seen some of the worst violence, including the burning of the church in Eldoret that killed 35 people seeking refuge. Trucks carrying WFP food remain stranded, having been prevented for days from entering western Kenya because of insecurity. The violence is also affecting shipment of WFP food to Uganda, southern Sudan, Somalia and the eastern Democratic Republic of Congo.

Vigilantes have set up checkpoints and transporters in Mombasa refuse to move trucks out of the port without escorts. Fuel shortages have meant that UN humanitarian flights to Somalia carrying aid workers and cargo such as medicine have also been cancelled. Meanwhile in western Kenya the UN estimates 180,000 people have been displaced by the unrest which has officially killed 360 people so far. The violence flared up after the disputed election result which saw incumbent president Mwai Kibaki re-elected at the expense of opposition leader Raila Odinga.

Odinga's party has demanded fresh presidential elections but may accept a coalition arrangement. "This is about a democracy and justice," said Anyang Nyongo, secretary-general of Odinga's Orange Democratic Movement. "We shall continue to defend and promote the right of Kenyans so that the democratic process should be fulfilled.” Kibaki has now said he will accept calls for a rerun of the disputed election, but only if a court orders it. Kenya’s High Court could annul the vote as illegal, which would force a new vote. South African bishop Desmond Tutu held talks with both men last week and said they were both “open to the possibilities of negotiations."

For that to happen, however, the violence needs to stop. There are signs things are slowly coming back to normal. The overnight death toll on Friday was fewer than 10, compared with the more than 300 people killed in a few days earlier this week. The security ring around Nairobi was relaxed overnight allowing free movement of commuter buses, private vehicles and pedestrians for the first time since Sunday. Hundreds of Odinga’s supporters attempted to protest in the capital yesterday but were forced to disperse by paramilitary police using tear gas.

Many media accounts have focussed on the ethnic dimension in the violence. Kibaki and his hardcore supporters belong to the Kikuyu group while Odinga draws his support from the Luo group. The majority of the victims so far are Kikuyus, who make up 22 per cent of the population and are Kenya’s largest ethnic group. However the Foreign Policy Watch blog warns that this aspect should not be exaggerated. It points out that Kikuyus and Luos (as well as other groups) cooperated in the National Rainbow Coalition that brought Kibaki to power in the first place, and places stronger blame on the endemic corruption at the heart of Kenyan society.

When Kibaki was elected in 2002, there were great hopes that Kenya had turned the corner and that he would be the man to reform the nation. But they proved to be false hopes. The opposition to Kibaki this time was especially intense among the poor jobless youths who had voted overwhelmingly for change. In their view, a ruling clique that had stolen billions of dollars in a period of five years had stolen the elections. According to Horace Campbell, this verdict was obscured by ethnic alienation and the constant refrain “that the crisis and killings emanated from deep 'tribal' hostilities.” Campbell argues this won’t change until there is a break from “looting, extra judicial killings, rape and violation of women, and general low respect for African lives.”

The poor state of Kenya’s prison system is symptomatic of wider ranging social, political, judicial and economic stagnation in the country. Prisoners are subject to dehumanising conditions and the remand system can leave the accused waiting for years for a trial. At Kamiti maximum security prison space is so tight that if one prisoner turns while sleeping, all must turn. Prison officer David Mwania said the situation was common in all Kenya jails. Mwania said the problems were due to lack of funds to provide for basic essentials for inmates. “Simply, the system cannot cope anymore,” he said.

Friday, January 04, 2008

Philippines: Estrada to ride into the sunset one more time

Convicted former Philippine president Joseph Estrada is planning a comeback, but he claims it is only as a movie star. He is planning a starring role in a film opposite one of the Philippines' leading comedy actresses, saying he needs the money. Though still the de facto opposition leader, he has brushed off speculation he plans to return to politics. Estrada was released after six years in prison when he was granted a presidential pardon two months ago.

Estrada made his name in over 100 movies before switching to politics, becoming vice-president under Fidel Ramos. Estrada won the 1998 presidential election in a landslide, with the biggest margin in Philippine history. He lasted just 30 months in the top job before allegations of corruption and incompetence brought him down: after his impeachment trial collapsed, he was ousted by a People Power protest. He was sentenced to life imprisonment in September 2007 after a six year trial, only to be pardoned two months later by his successor Gloria Macapagal-Arroyo. Now his supporters are fuelling speculation that Estrada will run again in the 2010 election. The current administration has warned him against it, saying the pardon specifically bans him from running again for the presidency.

Estrada’s overthrow in 2001 was the second people power revolution in 20 years and symptomatic of the Philippines’ torrid political history. The country was ruled for three hundred years by the Spanish in the wake of Ferdinand Magellan’s landing on the island of Cebu. Magellan himself was killed by local chieftain Lapu Lapu on the island of Mactan, but the Spanish were undeterred and launched another expedition from Mexico to claim the islands, which they named the "Islas Filipinas" in honour of King Philip II.

The Spanish had two hopes, economic and religious, for their new south east Asian colony. Firstly, they hoped to emulate the Portuguese, who had turned the nearby Spice Islands (Moluccas) into a lucrative source of exotic spices. In this, they were to be disappointed. But they had better results with their second ambition: converting the natives to Catholicism. They sent in a group of Augustinian, Dominican, and Franciscan missionaries, collectively known as the Friars, who carved out great secular power for themselves over the coming centuries. Their rule became known as the Friarocracy.

While the Philippines proved to be relatively spiceless, the Spanish discovered in the large northern island of Luzon an excellent deepwater port which could serve as a trade link between Asia and the Americas. Known as Manila, it quickly became the capital of the new colony. Galleons plied the route between Mexico and China, exchanging gold and silver for silk. The Spanish fought off the Chinese, the British and the Dutch for control of Manila and ensured a compliant local population as Catholic priests established towns around churches and gave religious instruction.

By the early 19th century the galleon trade was in decline and Manila’s wealthier mestizos (descendants of Spanish settlers intermarried with locals and Chinese) began to turn their attention to the fertile lands of the highly volcanic islands. They grew sugar and hemp and, as they grew rich, wanted a slice of the political power held firmly by Spanish officials and friars. Filipinos educated in Europe came home resenting the power of the elite and noting Spain’s own backwardness compared to its European neighbours. Physician and writer Jose Rizal came home from Spain in 1892 and founded La Liga Filipina, intent on enacting political reform.

While Rizal wanted a peaceful transfer of power, others founded revolutionary societies dedicated to the violent overthrow of the Spanish. The Katipunan launched a bloody armed revolt in 1896 but was defeated by the stronger Spanish forces. Although Rizal was not involved, he was arrested and accused of insurgency before being executed by firing squad. His martyrdom inflamed Filipino passions and a stronger revolt was launched in his name. Just when it seemed independence was on the cards, the Americans became involved, bringing their own brand of colonial rule.

In 1898 the US went to war with Spain over rights to Cuba. The war spread to the Pacific and US forces destroyed the Spanish fleet in Manila Bay. The Filipino rebels supported the Americans and took control of the country. In order to save face, the Spanish signed a secret deal with the US ceding rights to the Philippines for $20 million. The islanders had no say in the treaty and found themselves betrayed by their “allies”. The rebels continued to fight a war against their new rulers until they were defeated in 1902.

Nevertheless the Americans proved more benign rulers than the Spanish and encouraged Filipinos to become involved in their own political affairs. They wanted the Philippines to follow the US model of democracy. Under the Tydings-McDuffie Act, a commonwealth government was established in 1935 under the presidency of Manuel Quezon. Full independence was promised for 1946. But war intervened and the Japanese invaded Luzon just ten hours after attacking Pearl Harbor. They declared a puppet republic in 1943 under Jose Laurel. But General Douglas MacArthur made good on a promise to return a year later overturning the Japanese administration at a horrendous cost in Filipino lives.

In accordance with the previous agreement, Manuel Roxas was appointed first president of an independent Philippines in 1946. Landowners retained feudal control of the war-ravaged country and agrarian grievances triggered the Huk Rebellion which dragged on for almost two decades. Its end coincided with the rise to power of Ferdinand Marcos. Initially he was treated as a hero as he increased food production, financed public works and secured aid from the US. But as the tide turned against him in 1972, he declared martial law against what he saw as a Communist threat and a growing Muslim separatist movement in the south.

The Philippines settled into a pattern of corrupt and oppressive dictatorship, worsened by the assassination of popular opposition leader Benigno Aquino at Manila airport in 1983. Aquino’s funeral became the largest gathering in the nation’s history and ushered in a two year protest campaign that grew in intensity. Under pressure, Marcos declared a snap election in 1986. Aquino’s widow Corazon was put forward as opposition leader. Marcos’s handpicked scrutineers declared him the winner of a clearly fraudulent election. The announcement sparked four days of civilian unrest which became known as the EDSA Revolution (for the street Epifanio de los Santos Avenue which was the focal point of the demonstrations). Finally abandoned by his military, Marcos fled the country allowing Aquino to take power.

The 1990s began a new era of optimism which didn’t last long. Aquino’s regime was plagued by Marcos-supported coup attempts, leftist insurrections and constant electricity shortages. The massive 1991 eruption of Mount Pinatubo (the second largest on Earth in the 20th century) was followed by the closure of the nearby American Clark Air Base, which caused major economic hardship. Aquino was succeeded by her defence secretary Fidel Ramos, who was elected president in 1992. Ramos surfed a wave of growth in the Asian economy until it collapsed in 1997. That market meltdown brought Estrada centre stage for the 1998 election.

Much of Estrada’s support came from the country’s poor, who voted for him in record numbers. They remembered him from his movies as a defender of the downtrodden. But Estrada was unlike his screen persona. He did little to alleviate poverty and the economy stuttered. The negotiations that Ramos had instigated with Communist and Muslim rebels fell through. Then the corruption allegations began coming in 2000. An attempt to impeach him was quashed by his supporters. It was time for more people power. EDSA Mark II was launched as hundreds of thousands took to the streets. Once again, a president was deserted by his army and forced to stand down.

His vice-president and former opposition leader Gloria Macapagal-Arroyo was sworn in as president. Macapagal-Arroyo was born to rule. Her father Diosdado Macapagal was a one-term president who served from 1961 to 1965. His daughter gained a PhD in economics at Georgetown University in Washington, where she was a classmate of Bill Clinton. She served in Aquino’s government before running for senator in 1992 and rose to the vice presidency in Estrada’s election landslide. Macapagal-Arroyo cemented her presidency with a solid win in the 2004 election but under the Philippine constitution cannot run again in 2010.

Like Aquino before her, Macapagal-Arroyo has survived coup attempts, rebel activity and allegations of corruption. She is proving a solid financial manager. Economic growth has averaged 4.6 per cent under her rule and inflation has been kept under 3 per cent. The Filipino peso was Asia's best performing currency in 2007, strengthening by almost 20 per cent over the year. Analysts are now beginning to look at the 2010 campaign to replace her. Defence Secretary Gilbert Teodoro, Quezon City Mayor Feliciano "Sonny" Belmonte and Metro Manila Development Authority (MMDA) chairman Bayani Fernando are all early contenders. But Estrada cannot be ruled out despite the official ban. His supporters are playing up his chances in the absence of a unifying figure among opposition candidates. Asked to confirm the report on Thursday, Estrada coyly replied: "That is just speculation right now, but who knows?"

Thursday, January 03, 2008

Warhola: Andy Warhol at Brisbane’s GOMA

According to Brisbane Gallery of Modern Art (GOMA), the Andy Warhol exhibition has been a big success with 36,000 people attending since it opened four weeks ago with big queues every day. The exhibition is Australia’s first Warhol retrospective and is exclusive to Brisbane. It is the largest ever loan of works from the Andy Warhol Museum in Pittsburgh and features 300 pieces spanning his career from the 1950s to his death in 1987. Woolly Days joined the queue today to check it out.

Warhol was a great exponent of hype in his lifetime and it was tempting to look at his art in the same light. But that does great disservice to his body of work. Warhol had incredible energy and crammed a great deal into his short 58 years. He was active across many areas of art including paintings, drawings, prints, sculptures, photographs, films, videos and installations. The GOMA exhibition celebrates all this with his cow wallpaper thrown in for good measure. Warhol’s understanding of mass culture, semiotics, and the power of the media is brought out brilliantly in this retrospective. His Pop Art celebrates the age of mass production in an unsettling fashion. Warhol’s work was the epitome of postmodernism.

The exhibition is organised chronologically in the main, beginning with his early commercial illustrations which include his extraordinary shoe drawings. It then moves on through his silkscreen works of the 1960s (with his trademark soup cans). We see his wonderful celebrity portraits including Marilyn Monroe and Jackie Onassis. These are part of his decidedly dark “Death and Disaster” series with its forensic examination of car crashes. The cropped images are taken out of a journalistic framework and placed repeatedly into the context Warhol wants for them.

Warhol’s work took some time to recover from the 1968 assassination attempt. The exhibition features his famous paintings of Mao Zedong with their rich overtones of the similarities between consumerism and communism (Warhol was also attracted to Mao’s personality cult). The contrast is also obvious between the iconography and commercialism in the religious pieces such as his Last Supper (his final painting in 1986).

Perhaps the most interesting works in the entire exhibition are the later semiological pieces, including the dollar signs, crosses and the hammer and sickle series. Nevertheless the most popular items were his time capsules, thirty years of compulsive and compelling daily hoarding which present fascinating insights into the man and his times.

Warhol was born Andrew Warhola in Pittsburgh, Pennsylvania on 6 August 1928. His Byzantine Christian parents Ondrej and Julia were ethnic Rusyns (Ruthenians) from Slovakia who emigrated to work in the mines of Pennsylvania. Aged 8, young Warhola was struck down with chorea (St Vitus Dance), a neurological disorder which led to blotchiness in his skin pigmentation. He was a loner and a bedridden child who drew in bed and collected pictures of his favourite movie stars. Warhola studied commercial art at the School of Fine Arts at the Carnegie Institute of Technology in Pittsburgh and then moved to New York in 1949.

His first job was as a commercial illustrator, in which he was very successful. He drew shoe ads for I. Miller in a stylish blotted line. He also worked as an illustrator for several magazines including Vogue, Harper's Bazaar and The New Yorker. By the end of the fifties, he was one of the most sought after designers in New York. During this period he dropped the final ‘a’ from his name to become Warhol.

He first exhibited in an art gallery in 1962, when the Ferus Gallery in Los Angeles showed his 32 Campbell's Soup Cans, 1961-62. Fascinated by mass production, Warhol tried to duplicate it in his art. In 1963 Warhol founded a studio at East 47th St known as “The Silver Factory” where he employed a cast of fashionable young people to help him work and play.

After Marilyn Monroe died in 1962, Warhol became infatuated with her and produced hundreds of images of her. During this period, he became associated with the movement known as Pop Art, which emerged in Britain and the US in the 1950s; its name derived from "popular mass culture". Some critics hated Warhol's open embrace of market culture. But Warhol didn’t care. As he became more successful, he branched out into other art forms and became a key figure in New York’s underground art and cinema scene. He also brought Lou Reed and the Velvet Underground to world attention.

In 1968, Valerie Solanas, founder and sole member of SCUM (Society for Cutting Up Men) walked into the Factory, and shot Warhol. Earlier that day, Solanas had been turned away from the Factory after asking for the return of a misplaced script she had given to Warhol. The attack was nearly fatal and Warhol was in surgery for five hours. The attack profoundly impacted his work.

At the beginning of the 1970s, Warhol began publishing Interview magazine and renewed his focus on painting. Works created in this decade include Maos, Skulls, Hammer and Sickles, Torsos and Shadows and many commissioned portraits. By the end of the decade he was firmly established as a major 20th-century artist and international celebrity, and exhibited his work extensively in museums and galleries around the world.


In the 1980s, he created two cable television shows, "Andy Warhol's TV" and "Andy Warhol's Fifteen Minutes" (the latter for MTV in 1986). His paintings in this era include The Last Suppers, Rorschachs and, in a return to his Pop Art theme, a series called Ads. Warhol was admitted to hospital for routine gall bladder surgery but died from complications a day later, on 22 February 1987. He was buried in Pittsburgh, and more than 2,000 people attended a memorial mass at St. Patrick's Cathedral in New York organised by his friends and associates. The Andy Warhol Museum opened in Pittsburgh in 1994. It is the largest American art museum dedicated to a single artist, holding more than 12,000 works by the artist himself.

Wednesday, January 02, 2008

Radiohead declares war on the record industry

Radiohead front man Thom Yorke has hit back at record label EMI for suggesting the band left the label because EMI refused to come to a $20 million deal. The spat comes in the wake of Radiohead’s independent release of its new album In Rainbows to huge acclaim. Speaking on the band’s website, Yorke denied money was the reason for leaving EMI. “What we wanted was some control over our work and how it was used in the future by them,” he said. “That seemed reasonable to us, as we cared about it a great deal.” Yorke also warned new bands not to sign a record contract that strips them of their digital rights.

The claims come as Radiohead finally releases the CD version of In Rainbows after releasing it three months ago online in what the New York Times called a ‘tip jar’ arrangement. The same article now questions whether anyone will fork out hard cash for the “material” version of the same product. But Radiohead have pre-empted the question by promoting the CD release with a “prerecording” of the band performing songs from “In Rainbows” on their own website and on Current Television as “Scotch Mist”.

This is extremely clever marketing by the band. On 10 October, Radiohead released In Rainbows as a digital download. For the next two months fans could download the album and pay whatever they wanted for it, or indeed proffer no payment at all. The move caused a sensation in the music industry. No band as big as Radiohead had ever done such a thing. Besides the media hype, the music won the plaudits of critics as diverse as Pitchfork and Wired. The ploy also worked brilliantly from a public relations perspective: In Rainbows is the most talked about album of 2007. However the band has not released any figures about exactly how many downloads were made or how much people paid for them.

The band recently celebrated its 21st anniversary. The original four members were all public schoolboys at Abingdon School near Oxford. Yorke (vocals), Ed O'Brien (guitar), Phil Selway (drums) and Colin Greenwood (bass) formed a band in 1986 called “On a Friday”, celebrating the day on which the foursome got together to practice. Greenwood's brother Jonny soon joined on synths and guitars. But the band did not gain much momentum until after they completed their various degrees in the early 90s. They played in the indie scene in Oxford and eventually attracted the attention of the recording industry. They signed with EMI in 1991.

EMI’s first piece of advice was for the band to change their name. Singer Thom Yorke decided they would take it from the Talking Heads song “Radio Head”. Radiohead’s first EP with EMI, 1992’s “Drill”, is now a sought-after collector’s item but at the time failed to break the British top 100. Their breakthrough came in 1993 when “Creep” reached number 2 on the Billboard Modern Rock Tracks chart.

Despite this, the first album Pablo Honey (which included “Creep”) failed to excite. While their 1995 second album The Bends won critical acclaim and some success, they had to wait until 1997 and the release of OK Computer to really scale the musical heights. The album was a critical and commercial success and spent 71 consecutive weeks in the UK charts. According to ‘reluctant fan’ Jon Lusk, the album captured the zeitgeist of the despairing-yet-hopeful dying days of the British Conservative administration. Others said the band had caught a wave of generational anxiety. Whatever it was, the album was quickly proclaimed one of the greatest of all time.

Radiohead found it difficult to work out what to do next after the plaudits of OK Computer. They took a leisurely three years before they released Kid A. Without any conventional marketing, Kid A went straight to number one. Fans were eager to soak up the new Radiohead experience. Critics raved; Pitchfork gave it ten out of ten. The new album was a stark, bleak and mostly synthesised experience. Brent DiCrescenzo perceptively called it the “sound of a band, and its leader, losing faith in themselves, destroying themselves and subsequently rebuilding a perfect entity.” But after the hype had calmed down, this perfect entity had infuriated as much as it inspired.

Amnesiac was released in 2001, comprising additional tracks from the Kid A sessions. It too was successful, but although displaying a more jazzy feel, it also suffered from the deep depressive attitudes that infused the earlier album. The more guitar-oriented Hail to the Thief in 2003 represented in many ways a return to earlier form. Some critics were disappointed, feeling that Radiohead had not evolved creatively with the new record. The band didn’t care. At the height of their fame, they launched a world tour, headlining at Glastonbury and counting down the time to the end of their contract with EMI.

Finally free of their recording contract shackles in 2005, they began work on a new album. In an interview with Wired, Thom Yorke claimed EMI allowed no clauses for digital music rights as their contract was struck years before digital music stores were available. When In Rainbows was finally released in October 2007, it was the first real opportunity the group had to earn money on a digital offering. Yorke said the flexible price was manager Chris Hufford’s idea. “We all thought he was barmy. As we were putting up the site, we were still saying, ‘Are you sure about this?’ But it was really good,” said Yorke. “It released us from something. It wasn't nihilistic, implying that the music's not worth anything at all. It was the total opposite. And people took it as it was meant. Maybe that's just people having a little faith in what we're doing.”

Tuesday, January 01, 2008

A new regime of internet censorship in Australia

The NSW Council for Civil Liberties claimed today that Australia’s repressive new internet censorship laws will not stop computer-savvy children from looking at banned sites. Council vice-president David Bernie called the plan “political grandstanding” and a “gimmick” which is being sold as protecting the public from pornography but will instead lull parents into a false sense of security. Bernie also said the legislation has serious implications for freedom of expression. “Only adults would be restricted by the filters," he said.

The new rules will come into force on 20 January and will restrict access to age restricted content (commercial MA15+ content and R18+ content) either hosted in Australia or provided from Australia. The framework will apply to most content service providers who supply content via a carriage service. Labor justified its new policy on the basis that the Howard Government’s proposed policy of providing free NetNanny software to all households who wanted it didn’t adequately protect children.

The onus will be on Internet Service Providers to provide so-called “clean feeds”, and the program will be “opt out”, meaning users can elect not to receive the censored feed. This puts the onus to act on those who don’t want censorship and is a change from Labor’s original position, which was to make the policy “opt in”. Deliberately announcing the policy on New Year’s Eve (to minimise media scrutiny), new Telecommunications Minister Stephen Conroy claimed the scheme would better protect children from pornography and violent websites. He also said the Government would work with the industry to ensure the filters do not affect internet response times.
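Conroy gave no technical detail of how a “clean feed” would work at the ISP, but conceptually an opt-out filter reduces to a per-account flag checked against a blocklist, something like the following sketch (entirely hypothetical: the blocklist, names and logic are illustrative, not the government's actual scheme):

```python
# Hypothetical sketch of an opt-out ISP "clean feed".
# Blocklist entries and account flags are invented for illustration.
BLOCKLIST = {"banned-example.com", "another-banned-example.net"}

def allow_request(host: str, opted_out: bool) -> bool:
    """Return True if the subscriber's request should go through."""
    if opted_out:                 # subscriber elected the unfiltered feed
        return True
    return host not in BLOCKLIST  # default ("clean") feed blocks listed hosts

print(allow_request("banned-example.com", opted_out=False))  # False: filtered by default
print(allow_request("banned-example.com", opted_out=True))   # True: opting out lifts the filter
```

The default-deny flag is what distinguishes this from the earlier “opt in” proposal, and is why critics say the onus now falls on those who do not want censorship.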

This latest clumsy attempt to regulate the internet by the newly elected federal Labor government has been roundly condemned in the internet community both inside and outside Australia. Blorge.com points out that Australia already has some of the most restrictive internet censorship in the western world, outlawing X-rated pornography, casino-style internet gambling, R-rated computer games, BitTorrent sites, and certain forms of “hate speech”. The new censorship broadens these already wide laws to include “pornography and inappropriate material” as well.

Techcrunch’s Duncan Riley said just how far “inappropriate material” may extend was not made clear. He offered the example that questioning Government policy about Aboriginal people could be deemed to be discrimination under Australian law and hence blocked by the censorship regime. He also suggested bloggers could be blocked if they allow inappropriate comments. He said the legislation meant Australia would be joining China as one of the few countries globally that broadly censor the internet. “If there is one certainty in any country that implements broadscale censorship, once they start blocking content it doesn’t stop,” said Riley. “And certainly every do-gooder group and special interest lobbyist will be wanting the Government to add to the list.”

Ars Technica said adults will have to give up a little privacy to opt out. Users will need to prove their age by supplying their full names and either a credit card or a digital signature approved for online use. Content publishers are also required by law to keep records of who accessed R18+ content, and with what credentials, for a period of two years. Ars Technica wondered whether the rules aren't a complete waste of time, since Australia cannot enforce them in other countries.

These ineffective and wasteful laws are an example of “symbolic politics” on Labor’s part: they give the impression of action on an issue where there is none. The laws pander to hysteria and moral panics about minors’ access to the internet. They take no cognisance of the fact that children’s welfare is ultimately a parental responsibility, and instead offer to cheaply assuage the guilt of parents who take too little interest in their children’s affairs. The laws are also part of an ongoing battle between libertarianism and social conservatism.

But ultimately this is all about grubby dealmaking. By passing these new laws, Labor have one eye on conservative senator Steve Fielding whose support they will need to pass legislation in the senate post June 2008. The Family First senator also has his eye on Labor. Fielding, who has campaigned on ISP filtering, said he would be watching the government “like a hawk” on this issue. No doubt Labor will be wanting its pound of flesh on other big ticket items as a quid pro quo. In their minds, compromising freedom of expression on the internet is a small price to pay for getting an agenda through parliament.

Monday, December 31, 2007

Kenya media blackout greets disputed election result

The Kenyan government has ordered a broadcast media blackout in the wake of the riots that greeted the disputed election win of President Mwai Kibaki. The security ministry ordered the blackout “in the interest of public safety and tranquillity”. Journalists were also warned to stop broadcasting “inciting or alarming material” as supporters of defeated candidate Raila Odinga took to the streets in protest at the result. Plumes of smoke rose over the Kibera area of Nairobi, a stronghold of Odinga support, where pitched battles occurred between rioters and police on Saturday.

Kibaki was controversially re-elected in Kenya’s fourth election since pluralism was introduced in 1992. He was sworn in less than an hour after the electoral commission declared he had defeated Odinga by a quarter of a million votes. The official commission result gave Kibaki 4,584,721 votes against Odinga’s 4,352,993. Odinga had led Kibaki in pre-election opinion polls and in early poll tallies released to the media. He walked out of the press conference at which the results were announced, claiming that Kibaki was stealing victory.

In his acceptance speech yesterday Kibaki said he was “humbled and grateful” to be offered a second five-year term of office. He acknowledged the closeness of the contest and said now should be a time for “healing and reconciliation among all Kenyans”. He pledged to serve everyone and called for tolerance, peace and harmony. “I will shortly form a clean hands Government that represents the face of Kenya,” he said. “The new PNU Government will incorporate the affiliate parties as well as other friendly parties”.

However not everyone agrees with Kibaki that the elections were free and fair. British foreign minister David Miliband said he had real concerns about reported irregularities and promised to discuss the matter with international partners. He quoted EU observers who said they had not succeeded in establishing the credibility of the tallying process to the satisfaction of all parties and candidates. Miliband stopped short of declaring the result invalid. “Britain looked forward to working with a legitimately elected government of Kenya,” he said. But, he added ambiguously, “its outcome had to be seen by Kenyans to be fair.”

Despite 30,000 monitors on the ground, chief EU observer Alexander Graf Lambsdorff expressed regret that they had not been able to address “irregularities”. Lambsdorff said presidential tallies announced in polling stations on election day were inflated by the time they were released by the electoral commission in Nairobi. “Because of this and other observed irregularities, some doubt remains as to the accuracy of the result of the presidential election as announced today,” he said.

The 76-year-old Kibaki has been in power since 2002 and is only Kenya’s third president in the 43 years since independence. Jomo Kenyatta led the nation out of colonialism in 1964 and ruled until his death in 1978. His vice-president Daniel arap Moi took the reins for the next 24 years. Initially popular, Moi won elections in 1992 and 1997 but was eventually viewed as a despot. Towards the end of his reign, allegations of electoral fraud and corruption took their toll. Moi was implicated in the Goldenberg scandal, which saw Kenya lose $600 million in “fictitious” mineral exports in the 1990s.

Moi was forced to step down in 2002 as the constitution barred him from contesting a third election. In that election Moi supported Jomo Kenyatta’s son Uhuru, but he was defeated 2:1 by Kibaki. EU and UN observers declared the election free and fair. Like Moi, Mwai Kibaki had also served as vice-president, but he fell out of favour with Moi in 1988 and unsuccessfully contested the elections of the 1990s against him. He formed a rainbow alliance of opposition parties to combat the entrenched power of the Kenyan African National Union (KANU), the party of Kenyatta and Moi.

Moi’s legacy to Kibaki was a nation racked by corruption. According to Human Rights Watch, Kenya’s system of governance is based on highly centralised and personalised executive power. The average Kenyan was poorer in 2002 than two decades earlier. Kibaki claimed his goal was reconciliation and rebuilding of the economy. The Kenyan economy grew strongly during his five year tenure. However his 2005 attempt to redraw the constitution to give stronger powers to the presidency was rejected by a plebiscite.

His opponent, the 62-year-old Raila Odinga, was a former cabinet colleague of Kibaki. During the campaign he argued that few Kenyans have reaped the benefits of the country’s economic successes. After unilaterally declaring he was the winner on Saturday, Odinga now claims he was robbed of victory. Police have warned Odinga that he faces arrest if he goes ahead with a protest tomorrow against the result. Odinga’s Orange Democratic Movement planned to meet in Nairobi to present “the People’s President” to the nation. The Police Commissioner has declared the meeting illegal “in view of the prevailing security situation” and cautioned that anyone who attends “will face the full force of the law.” It appears unlikely Odinga’s angry supporters will heed the warning.

Sunday, December 30, 2007

Nationalism: A study of Imagined Communities

In a major speech to celebrate the 60th anniversary of Indian independence, Nobel laureate economist Amartya Sen has argued that nationalism is a double-edged sword. Sen said nationality was “universalising” and plays a role in uniting people. But he also said nationality is a major source of conflicts, hostilities and violence. “Nationalism can blind one’s vision about other societies and this can play a terrible part especially when one country is powerful vis-à-vis another,” he said. Nationalism, he concluded, is “both a curse and a boon”.

Sen’s points about the complexities of nationalism were borne out elsewhere on the planet. In Nepal, the rebels have got their wish to overthrow the monarchy. But their apparent avowal of Maoism could be undone by an equally strong desire to support Nepalese nationalism. In Scotland, optimistic nationalist fervour is on the increase in the wake of the “responsible actions” of the Scottish National Party since its victory in elections in May. Meanwhile the nationalistic stereotypes of Serbia were upended by the local version of Big Brother, where a boorish Kosovar Serb was quickly evicted from the program while a handsome Bosnian Muslim seemed likely to win.

Nationalism is one of the world’s most potent doctrines; it defines the right of a nation to exist independently based on some shared history, language or culture. The concept has been enormously influential. Millions have died in the twentieth century in the fight for nationalism and the nation state has become the fundamental building block of international relations. It is coded in the very name of the world organisation known as the United Nations.

Nationalism is a relatively modern concept. As late as 1914, dynastic states made up the majority of the world’s political system. One of the best books to look at the history and theory of nationalism is Benedict Anderson’s “Imagined Communities” (2nd edition - 1991). Anderson points out that every successful revolution since World War II has defined itself in national terms: China, Vietnam, Algeria etc. Anderson’s thesis is that the concept of the nation is the most universally legitimate value in the political life of our times. Yet despite this ubiquity, the concepts of nation, nationality and nationalism have all proved difficult to define and analyse.

Anderson’s solution is to define these terms as cultural artefacts. He defines the nation as an “imagined political community”, inherently limited and sovereign. It is imagined because the nation’s members (no matter how small the nation) will never know of, or meet, most of their fellow-members, yet each member shares a mental comradeship and image of their nation. Nationalism essentially invents nations where they do not exist. The nation is limited because each member is aware of its physical boundary, beyond which lie other nations. And it is sovereign because it is a product of the 18th century Enlightenment, which ended the concept of divinely-ordained dynastic realms. The most important quality of the nation is to be “free”.

It is the mental fraternity of this imagined bond that makes it possible for millions to die for, and kill for, such an idea. The nationalist imagining has many affinities with religion and shares with it a preoccupation with death and immortality. Nationalism turns chance into destiny: I can say it is an accident that I am either Irish or Australian, but both Ireland and Australia are “eternal”.

The roots of nationalism can be traced back to the rise of print-capitalism in the 16th century. Prior to 1500, four out of every five books printed were in the ecclesiastical language of Latin. But in the wake of Gutenberg, the vernacular ruled. Some 200 million books were produced in the next 100 years as the book became the first mass-produced industrial commodity. Yet even the success of the book was dwarfed by the rise of the newspaper: the “one day best-seller”. The newspaper created an extraordinary mass ceremony among the newly rising mercantile class: the simultaneous consumption of news. Newspapers were written in a vernacular that only those of their language-field understood. They were the embryo of an “imagined community”.

As the influence of newspapers grew, the next major development in the history of nationalism occurred in the western hemisphere. Between 1776 and 1838, a whole series of Creole states emerged in the Americas which self-consciously defined themselves as nations. The Latin American colonies made the break from Spain partly out of fear of lower-class insurrection as Madrid tried to introduce more humane laws on human rights and slavery. Spanish America fragmented into 18 nation states that corresponded roughly to the old viceregal administrative provinces. Meanwhile in North America, printer-journalists such as Benjamin Franklin became key figures in the communications and intellectual life that spurred on anti-colonialism.

Meanwhile Europe was still divided by the barriers of language. Lexicographers, grammarians and philologists were shaping 19th century nationalism. The leaders of nationalist movements in countries such as Finland and Bulgaria were writers and teachers of languages. State bureaucracies were on the rise, opening doors to people of varied social origins. The languages of state pushed obscurer tongues such as Irish and Breton to the margins. The elevated print-languages made it easier to arouse popular support for great causes such as the French Revolution.

But not until after the conflagration of World War I were Europe’s dynastic empires destroyed. By 1922 the Hapsburgs, Hohenzollerns, Romanovs and Ottomans were gone. The League of Nations showed the way forward but still displayed old biases, with non-European nations excluded. By 1945, according to Anderson, the “nation-state tide reached full flood”. In 1975 Portugal, the last of the European empires, shed its colonies. The new African states took on the borders of their old European administrations and, in most cases, their languages. Maps and censuses added to the institutionalisation of these new nation-states.

Of course, this inheritance from colonialism left anomalies all over the world. Passionate nationalism exists in such “nations” as the Karen, Palestine, West Papua, Kurdistan, Biafra, Somaliland and many others, but there is no nation-state. They have all developed nationalist movements, and many people have made the ultimate sacrifice for their “nation”, with colossal numbers prepared to lay down their lives for the ideal. As Anderson says, dying for one’s country assumes a moral grandeur which cannot be matched by, say, dying for the Labour Party, the American Medical Association or Amnesty International. All these are organisations a person can join or leave, but a person is deemed to have no choice over their country. But this may change again. After all, contemporary nationalism is the heir to two centuries of historic change. History has not yet ended. Who knows how nationalism will evolve in the new imagined spaces of the digital age?

Saturday, December 29, 2007

Darfur: Genocide by other means

While the white world frets over the fate of a few white people being released from Chad, the killing fields next door in Darfur continue to quietly bury the victims of its casual genocide. Death by war and violence has already claimed a quarter of a million people this century and now malnutrition threatens thousands more. A new UN World Food Programme survey shows the malnutrition rate has actually increased in Darfur since the height of the fighting in 2004. But while the story of the six members of Zoe’s Ark has been reported in over 1,300 news articles, the UN report on Darfur attracted just 117.

It doesn’t help that the Darfur Emergency Food Security and Nutrition Assessment (pdf) is not very sexily named. But the basic fact is that no white people are affected by this assessment. The victims are all, depending on labelling, either “Arab” or “African” or “Darfuri” or “Sudanese”. But whatever they are called, the numbers involved are staggering. Last year, it was estimated that some 3.74 million people were affected by the situation in Darfur.

Other key findings in the report are that the food and security outlook in all three Darfur provinces remains poor for the majority of the population: over two million people. Remote West Darfur (pdf), with no direct border to non-Darfuri Sudan, remains most at risk. Of Darfur’s total population of 6.7 million, 3.7 million rely on some sort of “humanitarian” assistance. Food production remains scanty, livestock is rare, and markets don’t function due to insecurity and poverty. And the world at large remains, generally, uninterested.

Darfur is well used to the lack of attention. The region was almost unheard of outside Sudan before 2003. Within Sudan it has been mostly neglected since its 19th century colonisation. And even when people started dying in sufficiently large numbers to attract the attention of the media and NGOs, the Americans and their allies were too tied up in Iraq to do anything about it, the UN was hamstrung by lack of funding, and the EU conveniently bickered and contrived to look the other way, as it always does. In the end it was decided that this was an “African problem” needing an “African solution”, and so the newly constituted African Union (AU) was given responsibility for solving it.

But this conveniently overlooks history and economics and the very obvious culpability of the West in the tragedy of Darfur. Gerard Prunier entitled his book on Darfur “The Ambiguous Genocide”. By that, Prunier was not trying to claim mass killing did not exist, but rather that the labelling of who did it and who they did it to, and indeed the label of “genocide” itself, have twisted the meaning of what happened in Darfur and how it is generally understood. The west’s quest for pithy explanations of news does not suit Darfur’s complex ethnography and history.

The conventional shorthand explanation is that an “Arab” militia, supported by the government in Khartoum, carried out mass atrocities on native “African” tribespeople in a land grab. This explanation overlooks deeper motives and trivialises the ethnic make-up of Central African peoples. It also gives the impression of violence by Muslim peoples against non-Muslim peoples. The fact, however, is that almost all Darfuris are Muslims. Unlike the colonial war the Khartoum government fought against the Christian and animist provinces of the south, the conflict in Darfur had no religious connotation at all. The shorthand also overlooks the role played by neighbouring Libya and Chad in the region’s destabilisation.

The population of Darfur is an ethnic mosaic, but in skin colour everyone is “black”. Language is often shared too, with “African” tribes speaking Arabic. The differences therefore come from a Sudanese cultural racism which distinguishes between “Arab” and “zurug” (the local pejorative word for blacks), a distinction that may hinge on such factors as the shape of the nose or the thickness of lips. Intertribal marriages and slave concubines have further muddied the racial waters. And what Sudan considers to be “Arab” would not necessarily be so accepted in the rest of the Arab world. The name Sudan itself derives from the Arabic Bilad-al-sudan, “country of the blacks”.

This lack of wider Arab acceptance makes the Sudanese “Arabs” even more sensitive to labelling within Sudan. Being described as Arab was a token of civilisation as opposed to African “savagery”, and marked a general change from nomadic to agricultural life. This ethnic construction was very much a product of the 20th century. Before then, Darfur was the home of migratory peoples south of the forbidding Sahara. Between the 13th and 16th centuries it was the scene of three major migrations: from the north-west came Nilo-Saharans, from southern Egypt came Nubians, and from the north-east came Arab groups. Later, more people arrived from Sudan itself. The last group to arrive were the most powerful: the awlad al-Bahar, “sons of the river”. The river was the Nile, and these riverine Arabs from Khartoum were the most powerful people in the land. They were traders and imams who settled in the towns of Darfur and turned it into a Sudanese province in the 19th century.

Prior to that, Darfur was an independent sultanate dating back to the 14th century, initially led by African tribes. In the 17th century we first hear of the “Fur” people, who descended from the mountains and overran the plains. Sultan Suleiman “Solungdungo” (the pale man) was the son of a Fur father and an Arab mother. The Fur assimilated other tribes to maintain their hegemony and the land became Dar Fur, the land of the Fur. At the start of the 19th century Darfur was a respected political entity, while “Sudan” did not exist as such. The Arabic “land of the blacks” was an arbitrary name that covered many jurisdictions; in colonial times the French also called what is now Mali “Le Soudan”. In 1821 the then stateless entity to the east of Darfur was invaded by Muhammad Ali, the Ottoman viceroy of Egypt. His forces defeated the Darfuri, who had similar designs on the territory and who fled back home to their province.

The Turco-Egyptians gradually extended their colonisation of Sudan south from Khartoum along the Nile. In 1873 they moved against the sultanate of Darfur and easily conquered it. But in 1881 a quasi-religious movement under the banner of the “Mahdi”, mixing Islamist and Christian revelatory practices, rebelled against the Turkish administration. When the Mahdist state later collapsed under an onslaught from the British, control of Darfur passed back to the sultanate. The British were content to rule with the “lightest of threads” and let the sultan govern as de facto leader of Darfur until 1916.

The fate of Darfur was sealed by World War I. Britain was worried about Turkish propaganda and feared Darfur could become a tool of the Central Powers. Looking beyond the war, they also feared the French influence from Chad in the west. The British invaded Darfur. The sultan resisted and he and his sons were shot dead in an ambush as they tried to flee on horseback. The tragedy of Darfur can be dated to the British occupation. From 1916 onwards, Darfur would only be an appendage of some bigger entity, never an object of attention in itself.

For the next 40 years Darfur was part of the grandly named Anglo-Egyptian Condominium. Although the Egyptians shared naming rights, this was just a clever move by the British to assuage Egyptian ego – the Brits were the real power. A handful of colonists ran the Sudan Political Service and its territory of 2.5 million sq kms. These men included the author Wilfred Thesiger, who served in Darfur in 1935. But Thesiger was the exception; what little power there was remained isolated in Khartoum. Darfur got no attention except when it caused trouble. Rebel Mahdists launched an uprising from Nyala in southern Darfur in 1921, which was brutally put down with 800 deaths. Otherwise Darfur was mostly ignored, and services including schools and hospitals were non-existent.

In the 1950s, the British fought a rearguard action to delay Sudanese independence. Darfur was not considered a threat because of its “backwardness”. Darfur became part of the new nation of Sudan in 1956 and participated in the first elections two years later. The “Umma” party won that election with a significant vote from Darfur, but the region got no thanks from its new political masters and continued to be ignored. The military then took over, with no change for Darfur. In 1964 the Umma won another political victory, again with help from Darfur. Once again, however, it carried no clout in Khartoum.

In 1965 neighbouring Chad descended into what was to be a decades-long civil war. Darfur would become central to the conflict, with the Chadian guerrilla group Frolinat based in Nyala, and the war spilled across the border. In 1969, newly installed Libyan leader Muammar Gaddafy entered the war in support of Frolinat. A brief attack by Gaddafy on Khartoum earned him the lasting enmity of the Sudanese government, which in retaliation supported an anti-Libyan, Hissène Habré, as the new leader of Chad. Darfur was transformed into a three-way battleground between Libya, Chad and Sudan.

In 1984, famine struck the Sahel and Darfur was devastated. Almost 100,000 people died of starvation in the next 12 months, and 80,000 walked across the country to food camps in Khartoum. The Gaafar Nimeiry regime, in power since 1969, was destabilised and the army took control. The army showed little inclination to solve the food problems of the west, and Libya took advantage to invade Darfur. Sudan tacitly accepted the temporary Libyan presence on “their” soil. But Chad did not, and fought the Darfuri and Libyan troops it accused of supporting Chadian rebel forces.

In 1989 Sudan underwent another army coup. Colonel Omar Hassan al-Bashir came to power in protest at the peace settlement with rebel Southern Sudan. The reality on the ground in Darfur continued to be bleak: the ravages of drought, war and lack of government interest left it on the brink of starvation. Slowly but surely, rebel groups dedicated to the fight against Khartoum began to form, and a low intensity civil war began. As the Cold War ended, new cultural labels arose which gave a political identity to the concept of “Arabism”. It was to those who defined themselves as “native Arabs” that Khartoum would look to carry out the violence to come.

A hitherto unknown Islamist group, the Justice and Equality Movement (JEM), claimed credit for starting a revolt within Darfur. It issued a “Black Book” outlining the discrimination Darfuris encountered in their relations with Khartoum. In 2003 rebels occupied El-Fashir airport in a major victory over government forces. Sudanese hardliners opted for a strong response. The army was not deemed up to the job; instead the government recruited “Arab” militiamen known as the Janjaweed (“evil horsemen”). First used in the 1980s, the Janjaweed were paid a good salary and given access to the Sudanese armoury. It was to be “counter-insurgency on the cheap”.

Russian-made Antonov airplanes bombed Darfuri cities, targeting civilians. After the air attacks finished, the Janjaweed arrived to finish the job. An orgy of killing, destruction, rape and looting followed. They hurled insults at the “Africans” and herded them into camps. The government issued propaganda that the rebels had demanded independence and a share in Sudan’s growing oil revenues; neither accusation was true. Masses of refugees fled towards Chad or the centre of the country, and aid was not getting through to the neediest areas.

News began to escape about how bad things were in Darfur. In 2004 the Red Cross spoke of an “agricultural collapse”. Khartoum prevaricated and found continual excuses to delay foreign intervention. The west was more interested in the fate of the peace talks between North and South Sudan. But Amnesty International and the International Crisis Group began to give Darfur the media attention it needed.

When the UN spoke of “genocide” the world’s press began its feeding frenzy. Now there was an angle to the story that would sell newspapers: the first genocide of the 21st century. Moral indignation lasted much of the next 12 months. Deaths continued and Sudan refused to admit culpability, talking instead of “bandits” and “rebels”. After concerted international pressure, the Janjaweed were forced to stop their killing. But peace remains elusive for Darfuris. Conflicts in Chad continue to reverberate across the border. Government indifference continues. The world does not have the stomach to help. None of the “humanitarian” solutions address the political inequity at the heart of the problem. Now malnutrition is about to draw its weapons against the stomachs of an already battered people. But the world’s media have moved on elsewhere, unable to turn this grotty, complex tale into a simple and compelling narrative.