
'Our minds can be hijacked': tech insiders who fear a smartphone dystopia


[unable to retrieve full-text content]


Ears

My theory is that most humans have been colonized with alien mind-control slugs that hold the earbuds for them, and the ones who can't wear earbuds are the only surviving free ones.
windybank (Cammeray, New South Wales, Australia, 15 days ago): I just thought I wasn’t Apple compatible
4 public comments
toddgrotenhuis (Indianapolis, 2 days ago): it me

mooglemoogle (Virginia, 14 days ago): I’ve heard this complaint so much but I’ve never had a problem with it. I guess I’m normal?? I never would have guessed...

unabatedshagie (Scotland, United Kingdom, 15 days ago): Glad I'm not the only one.

rickhensley (Ohio, 15 days ago): So I'm not the only one...

Facebook – You are the Product


John Lanchester

At the end of June, Mark Zuckerberg announced that Facebook had hit a new level: two billion monthly active users. That number, the company’s preferred ‘metric’ when measuring its own size, means two billion different people used Facebook in the preceding month. It is hard to grasp just how extraordinary that is. Bear in mind that thefacebook – its original name – was launched exclusively for Harvard students in 2004. No human enterprise, no new technology or utility or service, has ever been adopted so widely so quickly. The speed of uptake far exceeds that of the internet itself, let alone ancient technologies such as television or cinema or radio.

Also amazing: as Facebook has grown, its users’ reliance on it has also grown. The increase in numbers is not, as one might expect, accompanied by a lower level of engagement. More does not mean worse – or worse, at least, from Facebook’s point of view. On the contrary. In the far distant days of October 2012, when Facebook hit one billion users, 55 per cent of them were using it every day. At two billion, 66 per cent are. Its user base is growing at 18 per cent a year – which you’d have thought impossible for a business already so enormous. Facebook’s biggest rival for logged-in users is YouTube, owned by its deadly rival Alphabet (the company formerly known as Google), in second place with 1.5 billion monthly users. Three of the next four biggest apps, or services, or whatever one wants to call them, are WhatsApp, Messenger and Instagram, with 1.2 billion, 1.2 billion, and 700 million users respectively (the Chinese app WeChat is the other one, with 889 million). Those three entities have something in common: they are all owned by Facebook. No wonder the company is the fifth most valuable in the world, with a market capitalisation of $445 billion.

Zuckerberg’s news about Facebook’s size came with an announcement which may or may not prove to be significant. He said that the company was changing its ‘mission statement’, its version of the canting pieties beloved of corporate America. Facebook’s mission used to be ‘making the world more open and connected’. A non-Facebooker reading that is likely to ask: why? Connection is presented as an end in itself, an inherently and automatically good thing. Is it, though? Flaubert was sceptical about trains because he thought (in Julian Barnes’s paraphrase) that ‘the railway would merely permit more people to move about, meet and be stupid.’ You don’t have to be as misanthropic as Flaubert to wonder if something similar isn’t true about connecting people on Facebook. For instance, Facebook is generally agreed to have played a big, perhaps even a crucial, role in the election of Donald Trump. The benefit to humanity is not clear. This thought, or something like it, seems to have occurred to Zuckerberg, because the new mission statement spells out a reason for all this connectedness. It says that the new mission is to ‘give people the power to build community and bring the world closer together’.

Hmm. Alphabet’s mission statement, ‘to organise the world’s information and make it universally accessible and useful’, came accompanied by the maxim ‘Don’t be evil,’ which has been the source of a lot of ridicule: Steve Jobs called it ‘bullshit’.[1] Which it is, but it isn’t only bullshit. Plenty of companies, indeed entire industries, base their business model on being evil. The insurance business, for instance, depends on the fact that insurers charge customers more than their insurance is worth; that’s fair enough, since if they didn’t do that they wouldn’t be viable as businesses. What isn’t fair is the panoply of cynical techniques that many insurers use to avoid, as far as possible, paying out when the insured-against event happens. Just ask anyone who has had a property suffer a major mishap. It’s worth saying ‘Don’t be evil,’ because lots of businesses are. This is especially an issue in the world of the internet. Internet companies are working in a field that is poorly understood (if understood at all) by customers and regulators. The stuff they’re doing, if they’re any good at all, is by definition new. In that overlapping area of novelty and ignorance and unregulation, it’s well worth reminding employees not to be evil, because if the company succeeds and grows, plenty of chances to be evil are going to come along.

Google and Facebook have both been walking this line from the beginning. Their styles of doing so are different. An internet entrepreneur I know has had dealings with both companies. ‘YouTube knows they have lots of dirty things going on and are keen to try and do some good to alleviate it,’ he told me. I asked what he meant by ‘dirty’. ‘Terrorist and extremist content, stolen content, copyright violations. That kind of thing. But Google in my experience knows that there are ambiguities, moral doubts, around some of what they do, and at least they try to think about it. Facebook just doesn’t care. When you’re in a room with them you can tell. They’re’ – he took a moment to find the right word – ‘scuzzy’.

That might sound harsh. There have, however, been ethical problems and ambiguities about Facebook since the moment of its creation, a fact we know because its creator was live-blogging at the time. The scene is as it was recounted in Aaron Sorkin’s movie about the birth of Facebook, The Social Network. While in his first year at Harvard, Zuckerberg suffered a romantic rebuff. Who wouldn’t respond to this by creating a website where undergraduates’ pictures are placed side by side so that users of the site can vote for the one they find more attractive? (The film makes it look as if it was only female undergraduates: in real life it was both.) The site was called Facemash. In the great man’s own words, at the time:

I’m a little intoxicated, I’m not gonna lie. So what if it’s not even 10 p.m. and it’s a Tuesday night? What? The Kirkland dormitory facebook is open on my desktop and some of these people have pretty horrendous facebook pics. I almost want to put some of these faces next to pictures of some farm animals and have people vote on which is the more attractive … Let the hacking begin.

As Tim Wu explains in his energetic and original new book The Attention Merchants, a ‘facebook’ in the sense Zuckerberg uses it here ‘traditionally referred to a physical booklet produced at American universities to promote socialisation in the way that “Hi, My Name Is” stickers do at events; the pages consisted of rows upon rows of head shots with the corresponding name’. Harvard was already working on an electronic version of its various dormitory facebooks. The leading social network, Friendster, already had three million users. The idea of putting these two things together was not entirely novel, but as Zuckerberg said at the time, ‘I think it’s kind of silly that it would take the University a couple of years to get around to it. I can do it better than they can, and I can do it in a week.’

Wu argues that capturing and reselling attention has been the basic model for a large number of modern businesses, from posters in late 19th-century Paris, through the invention of mass-market newspapers that made their money not through circulation but through ad sales, to the modern industries of advertising and ad-funded TV. Facebook is in a long line of such enterprises, though it might be the purest ever example of a company whose business is the capture and sale of attention. Very little new thinking was involved in its creation. As Wu observes, Facebook is ‘a business with an exceedingly low ratio of invention to success’. What Zuckerberg had instead of originality was the ability to get things done and to see the big issues clearly. The crucial thing with internet start-ups is the ability to execute plans and to adapt to changing circumstances. It’s Zuck’s skill at doing that – at hiring talented engineers, and at navigating the big-picture trends in his industry – that has taken his company to where it is today. Those two huge sister companies under Facebook’s giant wing, Instagram and WhatsApp, were bought for $1 billion and $19 billion respectively, at a point when they had no revenue. No banker or analyst or sage could have told Zuckerberg what those acquisitions were worth; nobody knew better than he did. He could see where things were going and help make them go there. That talent turned out to be worth several hundred billion dollars.

Jesse Eisenberg’s brilliant portrait of Zuckerberg in The Social Network is misleading, as Antonio García Martínez, a former Facebook manager, argues in Chaos Monkeys, his entertainingly caustic book about his time at the company. The movie Zuckerberg is a highly credible character, a computer genius located somewhere on the autistic spectrum with minimal to non-existent social skills. But that’s not what the man is really like. In real life, Zuckerberg was studying for a degree with a double concentration in computer science and – this is the part people tend to forget – psychology. People on the spectrum have a limited sense of how other people’s minds work; autists, it has been said, lack a ‘theory of mind’. Zuckerberg, not so much. He is very well aware of how people’s minds work and in particular of the social dynamics of popularity and status. The initial launch of Facebook was limited to people with a Harvard email address; the intention was to make access to the site seem exclusive and aspirational. (And also to control site traffic so that the servers never went down. Psychology and computer science, hand in hand.) Then it was extended to other elite campuses in the US. When it launched in the UK, it was limited to Oxbridge and the LSE. The idea was that people wanted to look at what other people like them were doing, to see their social networks, to compare, to boast and show off, to give full rein to every moment of longing and envy, to keep their noses pressed against the sweet-shop window of others’ lives.

This focus attracted the attention of Facebook’s first external investor, the now notorious Silicon Valley billionaire Peter Thiel. Again, The Social Network gets it right: Thiel’s $500,000 investment in 2004 was crucial to the success of the company. But there was a particular reason Facebook caught Thiel’s eye, rooted in a byway of intellectual history. In the course of his studies at Stanford – he majored in philosophy – Thiel became interested in the ideas of the US-based French philosopher René Girard, as advocated in his most influential book, Things Hidden since the Foundation of the World. Girard’s big idea was something he called ‘mimetic desire’. Human beings are born with a need for food and shelter. Once these fundamental necessities of life have been acquired, we look around us at what other people are doing, and wanting, and we copy them. In Thiel’s summary, the idea is ‘that imitation is at the root of all behaviour’.

Girard was a Christian, and his view of human nature is that it is fallen. We don’t know what we want or who we are; we don’t really have values and beliefs of our own; what we have instead is an instinct to copy and compare. We are homo mimeticus. ‘Man is the creature who does not know what to desire, and who turns to others in order to make up his mind. We desire what others desire because we imitate their desires.’ Look around, ye petty, and compare. The reason Thiel latched onto Facebook with such alacrity was that he saw in it for the first time a business that was Girardian to its core: built on people’s deep need to copy. ‘Facebook first spread by word of mouth, and it’s about word of mouth, so it’s doubly mimetic,’ Thiel said. ‘Social media proved to be more important than it looked, because it’s about our natures.’ We are keen to be seen as we want to be seen, and Facebook is the most popular tool humanity has ever had with which to do that.

*

The view of human nature implied by these ideas is pretty dark. If all people want to do is go and look at other people so that they can compare themselves to them and copy what they want – if that is the final, deepest truth about humanity and its motivations – then Facebook doesn’t really have to take too much trouble over humanity’s welfare, since all the bad things that happen to us are things we are doing to ourselves. For all the corporate uplift of its mission statement, Facebook is a company whose essential premise is misanthropic. It is perhaps for that reason that Facebook, more than any other company of its size, has a thread of malignity running through its story. The high-profile, tabloid version of this has come in the form of incidents such as the live-streaming of rapes, suicides, murders and cop-killings. But this is one of the areas where Facebook seems to me relatively blameless. People live-stream these terrible things over the site because it has the biggest audience; if Snapchat or Periscope were bigger, they’d be doing it there instead.

In many other areas, however, the site is far from blameless. The highest-profile recent criticisms of the company stem from its role in Trump’s election. There are two components to this, one of them implicit in the nature of the site, which has an inherent tendency to fragment and atomise its users into like-minded groups. The mission to ‘connect’ turns out to mean, in practice, connect with people who agree with you. We can’t prove just how dangerous these ‘filter bubbles’ are to our societies, but it seems clear that they are having a severe impact on our increasingly fragmented polity. Our conception of ‘we’ is becoming narrower.

This fragmentation created the conditions for the second strand of Facebook’s culpability in the Anglo-American political disasters of the last year. The portmanteau terms for these developments are ‘fake news’ and ‘post-truth’, and they were made possible by the retreat from a general agora of public debate into separate ideological bunkers. In the open air, fake news can be debated and exposed; on Facebook, if you aren’t a member of the community being served the lies, you’re quite likely never to know that they are in circulation. It’s crucial to this that Facebook has no financial interest in telling the truth. No company better exemplifies the internet-age dictum that if the product is free, you are the product. Facebook’s customers aren’t the people who are on the site: its customers are the advertisers who use its network and who relish its ability to direct ads to receptive audiences. Why would Facebook care if the news streaming over the site is fake? Its interest is in the targeting, not in the content. This is probably one reason for the change in the company’s mission statement. If your only interest is in connecting people, why would you care about falsehoods? They might even be better than the truth, since they are quicker to identify the like-minded. The newfound ambition to ‘build communities’ makes it seem as if the company is taking more of an interest in the consequence of the connections it fosters.

Fake news is not, as Facebook has acknowledged, the only way it was used to influence the outcome of the 2016 presidential election. On 6 January 2017 the director of national intelligence published a report saying that the Russians had waged an internet disinformation campaign to damage Hillary Clinton and help Trump. ‘Moscow’s influence campaign followed a Russian messaging strategy that blends covert intelligence operations – such as cyber-activity – with overt efforts by Russian government agencies, state-funded media, third-party intermediaries, and paid social media users or “trolls”,’ the report said. At the end of April, Facebook got around to admitting this (by then) fairly obvious truth, in an interesting paper published by its internal security division. ‘Fake news’, they argue, is an unhelpful, catch-all term because misinformation is in fact spread in a variety of ways:

Information (or Influence) Operations – Actions taken by governments or organised non-state actors to distort domestic or foreign political sentiment.

False News – News articles that purport to be factual, but which contain intentional misstatements of fact with the intention to arouse passions, attract viewership, or deceive.

False Amplifiers – Co-ordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g. by discouraging specific parties from participating in discussion, or amplifying sensationalistic voices over others).

Disinformation – Inaccurate or manipulated information/content that is spread intentionally. This can include false news, or it can involve more subtle methods, such as false flag operations, feeding inaccurate quotes or stories to innocent intermediaries, or knowingly amplifying biased or misleading information.

The company is promising to treat this problem or set of problems as seriously as it treats such other problems as malware, account hacking and spam. We’ll see. One man’s fake news is another’s truth-telling, and Facebook works hard at avoiding responsibility for the content on its site – except for sexual content, about which it is super-stringent. Nary a nipple on show. It’s a bizarre set of priorities, which only makes sense in an American context, where any whiff of explicit sexuality would immediately give the site a reputation for unwholesomeness. Photos of breastfeeding women are banned and rapidly get taken down. Lies and propaganda are fine.

The key to understanding this is to think about what advertisers want: they don’t want to appear next to pictures of breasts because it might damage their brands, but they don’t mind appearing alongside lies because the lies might be helping them find the consumers they’re trying to target. In Move Fast and Break Things, his polemic against the ‘digital-age robber barons’, Jonathan Taplin points to an analysis on Buzzfeed: ‘In the final three months of the US presidential campaign, the top-performing fake election news stories on Facebook generated more engagement than the top stories from major news outlets such as the New York Times, Washington Post, Huffington Post, NBC News and others.’ This doesn’t sound like a problem Facebook will be in any hurry to fix.

The fact is that fraudulent content, and stolen content, are rife on Facebook, and the company doesn’t really mind, because it isn’t in its interest to mind. Much of the video content on the site is stolen from the people who created it. An illuminating YouTube video from Kurzgesagt, a German outfit that makes high-quality short explanatory films, notes that in 2015, 725 of Facebook’s top one thousand most viewed videos were stolen. This is another area where Facebook’s interests contradict society’s. We may collectively have an interest in sustaining creative and imaginative work in many different forms and on many platforms. Facebook doesn’t. It has two priorities, as Martínez explains in Chaos Monkeys: growth and monetisation. It simply doesn’t care where the content comes from. It is only now starting to care about the perception that much of the content is fraudulent, because if that perception were to become general, it might affect the amount of trust and therefore the amount of time people give to the site.

Zuckerberg himself has spoken up on this issue, in a Facebook post addressing the question of ‘Facebook and the election’. After a certain amount of boilerplate bullshit (‘Our goal is to give every person a voice. We believe deeply in people’), he gets to the nub of it. ‘Of all the content on Facebook, more than 99 per cent of what people see is authentic. Only a very small amount is fake news and hoaxes.’ More than one Facebook user pointed out that in their own news feed, Zuckerberg’s post about authenticity ran next to fake news. In one case, the fake story pretended to be from the TV sports channel ESPN. When it was clicked on, it took users to an ad selling a diet supplement. As the writer Doc Searls pointed out, it’s a double fraud, ‘outright lies from a forged source’, which is quite something to have right slap next to the head of Facebook boasting about the absence of fraud. Evan Williams, co-founder of Twitter and founder of the long-read specialist Medium, found the same post by Zuckerberg next to a different fake ESPN story and another piece of fake news purporting to be from CNN, announcing that Congress had disqualified Trump from office. When clicked-through, that turned out to be from a company offering a 12-week programme to strengthen toes. (That’s right: strengthen toes.) Still, we now know that Zuck believes in people. That’s the main thing.

*

A neutral observer might wonder if Facebook’s attitude to content creators is sustainable. Facebook needs content, obviously, because that’s what the site consists of: content that other people have created. It’s just that it isn’t too keen on anyone apart from Facebook making any money from that content. Over time, that attitude is profoundly destructive to the creative and media industries. Access to an audience – that unprecedented two billion people – is a wonderful thing, but Facebook isn’t in any hurry to help you make money from it. If the content providers all eventually go broke, well, that might not be too much of a problem. There are, for now, lots of willing providers: anyone on Facebook is in a sense working for Facebook, adding value to the company. In 2014, the New York Times did the arithmetic and found that humanity was spending 39,757 collective years on the site, every single day. Jonathan Taplin points out that this is ‘almost fifteen million years of free labour per year’. That was back when it had a mere 1.23 billion users.
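Taplin’s conversion is easy to check against the Times figure:

$$39{,}757\ \text{years/day} \times 365\ \text{days/year} \approx 14.5\ \text{million years/year}$$

of aggregate human attention given over, free, to the site.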

Taplin has worked in academia and in the film industry. The reason he feels so strongly about these questions is that he started out in the music business, as manager of The Band, and was on hand to watch the business being destroyed by the internet. What had been a $20 billion industry in 1999 was a $7 billion industry 15 years later. He saw musicians who had made a good living become destitute. That didn’t happen because people had stopped listening to their music – more people than ever were listening to it – but because music had become something people expected to be free. YouTube is the biggest source of music in the world, playing billions of tracks annually, but in 2015 musicians earned less from it and from its ad-supported rivals than they earned from sales of vinyl. Not CDs and recordings in general: vinyl.

Something similar has happened in the world of journalism. Facebook is in essence an advertising company which is indifferent to the content on its site except insofar as it helps to target and sell advertisements. A version of Gresham’s law is at work, in which fake news, which gets more clicks and is free to produce, drives out real news, which often tells people things they don’t want to hear, and is expensive to produce. In addition, Facebook uses an extensive set of tricks to increase its traffic and the revenue it makes from targeting ads, at the expense of the news-making institutions whose content it hosts. Its news feed directs traffic at you based not on your interests, but on how to make the maximum amount of advertising revenue from you. In September 2016, Alan Rusbridger, the former editor of the Guardian, told a Financial Times conference that Facebook had ‘sucked up $27 million’ of the newspaper’s projected ad revenue that year. ‘They are taking all the money because they have algorithms we don’t understand, which are a filter between what we do and how people receive it.’

This goes to the heart of the question of what Facebook is and what it does. For all the talk about connecting people, building community, and believing in people, Facebook is an advertising company. Martínez gives the clearest account both of how it ended up like that, and how Facebook advertising works. In the early years of Facebook, Zuckerberg was much more interested in the growth side of the company than in the monetisation. That changed when Facebook went in search of its big payday at the initial public offering, the shining day when shares in a business first go on sale to the general public. This is a huge turning-point for any start-up: in the case of many tech industry workers, the hope and expectation associated with ‘going public’ is what attracted them to their firm in the first place, and/or what has kept them glued to their workstations. It’s the point where the notional money of an early-days business turns into the real cash of a public company.

Martínez was there at the very moment when Zuck got everyone together to tell them they were going public, the moment when all Facebook employees knew that they were about to become rich:

I had chosen a seat behind a detached pair, who on further inspection turned out to be Chris Cox, head of FB product, and Naomi Gleit, a Harvard grad who joined as employee number 29, and was now reputed to be the current longest-serving employee other than Mark.

Naomi, between chats with Cox, was clicking away on her laptop, paying little attention to the Zuckian harangue. I peered over her shoulder at her screen. She was scrolling down an email with a number of links, and progressively clicking each one into existence as another tab on her browser. Clickathon finished, she began lingering on each with an appraiser’s eye. They were real estate listings, each for a different San Francisco property.

Martínez took note of one of the properties and looked it up later. Price: $2.4 million. He is fascinating, and fascinatingly bitter, on the subject of class and status differences in Silicon Valley, in particular the never publicly discussed issue of the huge gulf between early employees in a company, who have often been made unfathomably rich, and the wage slaves who join the firm later in its story. ‘The protocol is not to talk about it at all publicly.’ But, as Bonnie Brown, a masseuse at Google in the early days, wrote in her memoir, ‘a sharp contrast developed between Googlers working side by side. While one was looking at local movie times on their monitor, the other was booking a flight to Belize for the weekend. How was the conversation on Monday morning going to sound now?’

When the time came for the IPO, Facebook needed to turn from a company with amazing growth to one that was making amazing money. It was already making some, thanks to its sheer size – as Martínez observes, ‘a billion times any number is still a big fucking number’ – but not enough to guarantee a truly spectacular valuation on launch. It was at this stage that the question of how to monetise Facebook got Zuckerberg’s full attention. It’s interesting, and to his credit, that he hadn’t put too much focus on it before – perhaps because he isn’t particularly interested in money per se. But he does like to win.

The solution was to take the huge amount of information Facebook has about its ‘community’ and use it to let advertisers target ads with a specificity never known before, in any medium. Martínez: ‘It can be demographic in nature (e.g. 30-to-40-year-old females), geographic (people within five miles of Sarasota, Florida), or even based on Facebook profile data (do you have children; i.e. are you in the mommy segment?).’ Taplin makes the same point:

If I want to reach women between the ages of 25 and 30 in zip code 37206 who like country music and drink bourbon, Facebook can do that. Moreover, Facebook can often get friends of these women to post a ‘sponsored story’ on a targeted consumer’s news feed, so it doesn’t feel like an ad. As Zuckerberg said when he introduced Facebook Ads, ‘Nothing influences people more than a recommendation from a trusted friend. A trusted referral is the Holy Grail of advertising.’

That was the first part of the monetisation process for Facebook, when it turned its gigantic scale into a machine for making money. The company offered advertisers an unprecedentedly precise tool for targeting their ads at particular consumers. (Particular segments of voters too can be targeted with complete precision. One instance from 2016 was an anti-Clinton ad repeating a notorious speech she made in 1996 on the subject of ‘super-predators’. The ad was sent to African-American voters in areas where the Republicans were trying, successfully as it turned out, to suppress the Democrat vote. Nobody else saw the ads.)
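To make the targeting mechanics concrete, here is a minimal sketch of the kind of segment predicate Taplin describes. It is an illustration only: the field names, profiles and data are invented, not Facebook’s actual targeting interface.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    age: int
    sex: str
    zip_code: str
    interests: set

def in_segment(p: Profile) -> bool:
    # Taplin's example segment: women aged 25-30 in zip code 37206
    # who like country music and drink bourbon.
    return (p.sex == "female"
            and 25 <= p.age <= 30
            and p.zip_code == "37206"
            and {"country music", "bourbon"} <= p.interests)

user_base = [
    Profile(27, "female", "37206", {"country music", "bourbon", "hiking"}),
    Profile(34, "female", "37206", {"country music", "bourbon"}),
    Profile(28, "male", "90210", {"jazz"}),
]

audience = [p for p in user_base if in_segment(p)]
print(len(audience))  # 1: only the first profile matches every condition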

The second big shift around monetisation came in 2012 when internet traffic began to switch away from desktop computers towards mobile devices. If you do most of your online reading on a desktop, you are in a minority. The switch was a potential disaster for all businesses which relied on internet advertising, because people don’t much like mobile ads, and were far less likely to click on them than on desktop ads. In other words, although general internet traffic was increasing rapidly, because the growth was coming from mobile, the traffic was becoming proportionately less valuable. If the trend were to continue, every internet business that depended on people clicking links – i.e. pretty much all of them, but especially the giants like Google and Facebook – would be worth much less money.

Facebook solved the problem by means of a technique called ‘onboarding’. As Martínez explains it, the best way to think about this is to consider our various kinds of name and address.

For example, if Bed, Bath and Beyond wants to get my attention with one of its wonderful 20 per cent off coupons, it calls out:

Antonio García Martínez
1 Clarence Place #13
San Francisco, CA 94107

If it wants to reach me on my mobile device, my name there is:

38400000-8cf0-11bd-b23e-10b96e40000d

That’s my quasi-immutable device ID, broadcast hundreds of times a day on mobile ad exchanges.

On my laptop, my name is this:

07J6yJPMB9juTowar.AWXGQnGPA1MCmThgb9wN4vLoUpg.BUUtWg.rg.FTN.0.AWUxZtUf

This is the content of the Facebook re-targeting cookie, which is used to target ads at you based on your mobile browsing.

Though it may not be obvious, each of these keys is associated with a wealth of our personal behaviour data: every website we’ve been to, many things we’ve bought in physical stores, and every app we’ve used and what we did there … The biggest thing going on in marketing right now, what is generating tens of billions of dollars in investment and endless scheming inside the bowels of Facebook, Google, Amazon and Apple, is how to tie these different sets of names together, and who controls the links. That’s it.
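One hypothetical way to picture the ‘tying together’ Martínez describes is as an identity graph: every key is a node, and each observed link merges two nodes into one profile. A toy sketch (union-find over invented keys; real identity-resolution systems are vastly more elaborate):

```python
# Each 'name' for a person (postal identity, mobile device ID, browser
# cookie) is a node; linking two keys merges their profiles.
parent: dict[str, str] = {}

def find(key: str) -> str:
    """Return the canonical profile for a key, with path halving."""
    parent.setdefault(key, key)
    while parent[key] != key:
        parent[key] = parent[parent[key]]
        key = parent[key]
    return key

def link(a: str, b: str) -> None:
    """Record that two keys belong to the same person."""
    parent[find(a)] = find(b)

link("postal:a_martinez:94107", "device:38400000-8cf0-11bd-b23e-10b96e40000d")
link("device:38400000-8cf0-11bd-b23e-10b96e40000d", "cookie:07J6yJPMB9ju")

# All three of the 'names' now resolve to one profile.
assert find("postal:a_martinez:94107") == find("cookie:07J6yJPMB9ju")
```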

Facebook already had a huge amount of information about people and their social networks and their professed likes and dislikes.[2] After waking up to the importance of monetisation, they added to their own data a huge new store of data about offline, real-world behaviour, acquired through partnerships with big companies such as Experian, which have been monitoring consumer purchases for decades via their relationships with direct marketing firms, credit card companies, and retailers. There doesn’t seem to be a one-word description of these firms: ‘consumer credit agencies’ or something similar about sums it up. Their reach is much broader than that makes it sound, though.[3] Experian says its data is based on more than 850 million records and claims to have information on 49.7 million UK adults living in 25.2 million households in 1.73 million postcodes. These firms know all there is to know about your name and address, your income and level of education, your relationship status, plus everywhere you’ve ever paid for anything with a card. Facebook could now put your identity together with the unique device identifier on your phone.

That was crucial to Facebook’s new profitability. On mobiles, people spend most of their time inside apps, which corral the information they gather and don’t share it with other companies. A game app on your phone is unlikely to know anything about you except the level you’ve got to on that particular game. But because everyone in the world is on Facebook, the company knows everyone’s phone identifier. It was now able to set up an ad server delivering far better targeted mobile ads than anyone else could manage, and in a more elegant and well-integrated form.

So Facebook knows your phone ID and can add it to your Facebook ID. It puts that together with the rest of your online activity: not just every site you’ve ever visited, but every click you’ve ever made – the Facebook button tracks every Facebook user, whether they click on it or not. Since the Facebook button is pretty much ubiquitous on the net, this means that Facebook sees you, everywhere. Now, thanks to its partnerships with the old-school credit firms, Facebook knew who everybody was, where they lived, and everything they’d ever bought with plastic in a real-world offline shop.[4] All this information is used for a purpose which is, in the final analysis, profoundly bathetic. It is to sell you things via online ads.
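Footnote 4 notes that the data is hashed before it is exchanged. A minimal sketch of how such a pseudonymised join could work, assuming SHA-256 over a normalised email address; the actual fields and scheme are not public:

```python
import hashlib

def pseudonymise(email: str) -> str:
    """Normalise, then hash: both sides can match records on the
    digest without exchanging the raw address."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Platform records and data-broker records, keyed only by the hash:
platform_side = {pseudonymise("Jane.Doe@example.com"): "user_123"}
broker_side = {pseudonymise(" jane.doe@example.com"): {"card_spend_12m": 8400}}

for digest, user_id in platform_side.items():
    if digest in broker_side:  # the join succeeds on the digest alone
        print(user_id, broker_side[digest])
```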

The ads work on two models. In one of them, advertisers ask Facebook to target consumers from a particular demographic – our thirty-something bourbon-drinking country music fan, or our African American in Philadelphia who was lukewarm about Hillary. But Facebook also delivers ads via a process of online auctions, which happen in real time whenever you click on a website. Because every website you’ve ever visited (more or less) has planted a cookie on your web browser, when you go to a new site, there is a real-time auction, in millionths of a second, to decide what your eyeballs are worth and what ads should be served to them, based on what your interests, and income level and whatnot, are known to be. This is the reason ads have that disconcerting tendency to follow you around, so that you look at a new telly or a pair of shoes or a holiday destination, and they’re still turning up on every site you visit weeks later. This was how, by chucking talent and resources at the problem, Facebook was able to turn mobile from a potential revenue disaster to a great hot steamy geyser of profit.
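In outline, the auction described here is a second-price auction run by an ad exchange in the instant before the page renders. A stripped-down sketch, with invented bidders and prices (real exchanges speak protocols such as OpenRTB and add price floors, fees and fraud checks):

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    cpm: float  # offered price per thousand impressions, in dollars

def run_auction(bids: list[Bid]) -> tuple[str, float] | None:
    """Second-price rule: the highest bidder wins the impression
    but pays the runner-up's price."""
    if not bids:
        return None
    ranked = sorted(bids, key=lambda b: b.cpm, reverse=True)
    winner = ranked[0]
    price = ranked[1].cpm if len(ranked) > 1 else winner.cpm
    return winner.advertiser, price

# What these particular eyeballs are worth to three hypothetical bidders:
bids = [Bid("shoe_retailer", 4.20), Bid("holiday_site", 3.10), Bid("telly_shop", 5.80)]
print(run_auction(bids))  # ('telly_shop', 4.2)
```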

What this means is that even more than it is in the advertising business, Facebook is in the surveillance business. Facebook, in fact, is the biggest surveillance-based enterprise in the history of mankind. It knows far, far more about you than the most intrusive government has ever known about its citizens. It’s amazing that people haven’t really understood this about the company. I’ve spent time thinking about Facebook, and the thing I keep coming back to is that its users don’t realise what it is the company does. What Facebook does is watch you, and then use what it knows about you and your behaviour to sell ads. I’m not sure there has ever been a more complete disconnect between what a company says it does – ‘connect’, ‘build communities’ – and the commercial reality. Note that the company’s knowledge about its users isn’t used merely to target ads but to shape the flow of news to them. Since there is so much content posted on the site, the algorithms used to filter and direct that content are the thing that determines what you see: people think their news feed is largely to do with their friends and interests, and it sort of is, with the crucial proviso that it is their friends and interests as mediated by the commercial interests of Facebook. Your eyes are directed towards the place where they are most valuable for Facebook.

*

I’m left wondering what will happen when and if this $450 billion penny drops. Wu’s history of attention merchants shows that there is a suggestive pattern here: that a boom is more often than not followed by a backlash, that a period of explosive growth triggers a public and sometimes legislative reaction. Wu’s first example is the draconian anti-poster laws introduced in early 20th-century Paris (and still in force – one reason the city is by contemporary standards undisfigured by ads). As Wu says, ‘when the commodity in question is access to people’s minds, the perpetual quest for growth ensures that forms of backlash, both major and minor, are all but inevitable.’ Wu calls a minor form of this phenomenon the ‘disenchantment effect’.

Facebook seems vulnerable to these disenchantment effects. One place they are likely to begin is in the core area of its business model – ad-selling. The advertising it sells is ‘programmatic’, i.e. determined by computer algorithms that match the customer to the advertiser and deliver ads accordingly, via targeting and/or online auctions. The problem with this from the customer’s point of view – remember, the customer here is the advertiser, not the Facebook user – is that a lot of the clicks on these ads are fake. There is a mismatch of interests here. Facebook wants clicks, because that’s how it gets paid: when ads are clicked on. But what if the clicks aren’t real but are instead automated clicks from fake accounts run by computer bots? This is a well-known problem, which particularly affects Google, because it’s easy to set up a site, allow it to host programmatic ads, then set up a bot to click on those ads, and collect the money that comes rolling in. On Facebook the fraudulent clicks are more likely to be from competitors trying to drive each other’s costs up.

The industry publication Ad Week estimates the annual cost of click fraud at $7 billion, about a sixth of the entire market. One single fraud site, Methbot, whose existence was exposed at the end of last year, uses a network of hacked computers to generate between three and five million dollars’ worth of fraudulent clicks every day. Estimates of fraudulent traffic’s market share are variable, with some guesses coming in at around 50 per cent; some website owners say their own data indicates a fraudulent-click rate of 90 per cent. This is by no means entirely Facebook’s problem, but it isn’t hard to imagine how it could lead to a big revolt against ‘ad tech’, as this technology is generally known, on the part of the companies who are paying for it. I’ve heard academics in the field say that there is a form of corporate groupthink in the world of the big buyers of advertising, who are currently responsible for directing large parts of their budgets towards Facebook. That mindset could change. Also, many of Facebook’s metrics are tilted to catch the light at the angle which makes them look shiniest. A video is counted as ‘viewed’ on Facebook if it runs for three seconds, even if the user is scrolling past it in her news feed and even if the sound is off. Many Facebook videos with hundreds of thousands of ‘views’, if counted by the techniques that are used to count television audiences, would have no viewers at all.
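Those figures imply a rough scale for the whole business. If $7 billion of click fraud is about a sixth of the market, then:

$$6 \times \$7\text{bn} \approx \$42\text{bn per year}; \qquad \$3\text{–}5\text{m/day} \times 365 \approx \$1.1\text{–}1.8\text{bn per year from Methbot alone.}$$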

A customers’ revolt could overlap with a backlash from regulators and governments. Google and Facebook have what amounts to a monopoly on digital advertising. That monopoly power is becoming more and more important as advertising spend migrates online. Between them, they have already destroyed large sections of the newspaper industry. Facebook has done a huge amount to lower the quality of public debate and to ensure that it is easier than ever before to tell what Hitler approvingly called ‘big lies’ and broadcast them to a big audience. The company has no business need to care about that, but it is the kind of issue that could attract the attention of regulators.

That isn’t the only external threat to the Google/Facebook duopoly. The US attitude to anti-trust law was shaped by Robert Bork, the judge whom Reagan nominated for the Supreme Court but the Senate failed to confirm. Bork’s most influential legal stance came in the area of competition law. He promulgated the doctrine that the only form of anti-competitive action which matters concerns the prices paid by consumers. His idea was that if the price is falling that means the market is working, and no questions of monopoly need be addressed. This philosophy still shapes regulatory attitudes in the US and it’s the reason Amazon, for instance, has been left alone by regulators despite the manifestly monopolistic position it holds in the world of online retail, books especially.

The big internet enterprises seem invulnerable on these narrow grounds. Or they do until you consider the question of individualised pricing. The huge data trail we all leave behind as we move around the internet is increasingly used to target us with prices which aren’t like the tags attached to goods in a shop. On the contrary, they are dynamic, moving with our perceived ability to pay.[5] Four researchers based in Spain studied the phenomenon by creating automated personas to behave as if, in one case, ‘budget conscious’ and in another ‘affluent’, and then checking to see if their different behaviour led to different prices. It did: a search for headphones returned a set of results which were on average four times more expensive for the affluent persona. An airline-ticket discount site charged higher fares to the affluent consumer. In general, the location of the searcher caused prices to vary by as much as 166 per cent. So in short, yes, personalised prices are a thing, and the ability to create them depends on tracking us across the internet. That seems to me a prima facie violation of the American post-Bork monopoly laws, focused as they are entirely on price. It’s sort of funny, and also sort of grotesque, that an unprecedentedly huge apparatus of consumer surveillance is fine, apparently, but an unprecedentedly huge apparatus of consumer surveillance which results in some people paying higher prices may well be illegal.

Perhaps the biggest potential threat to Facebook is that its users might go off it. Two billion monthly active users is a lot of people, and the ‘network effects’ – the scale of the connectivity – are, obviously, extraordinary. But there are other internet companies which connect people on the same scale – Snapchat has 166 million daily users, Twitter 328 million monthly users – and as we’ve seen in the disappearance of Myspace, the onetime leader in social media, when people change their minds about a service, they can go off it hard and fast.

For that reason, were it to be generally understood that Facebook’s business model is based on surveillance, the company would be in danger. The one time Facebook did poll its users about the surveillance model was in 2011, when it proposed a change to its terms and conditions – the change that underpins the current template for its use of data. The result of the poll was clear: 90 per cent of the vote was against the changes. Facebook went ahead and made them anyway, on the grounds that so few people had voted. No surprise there, neither in the users’ distaste for surveillance nor in the company’s indifference to that distaste. But this is something which could change.

The other thing that could happen at the level of individual users is that people stop using Facebook because it makes them unhappy. This isn’t the same issue as the scandal in 2014 when it turned out that social scientists at the company had deliberately manipulated some people’s news feeds to see what effect, if any, it had on their emotions. The resulting paper, published in the Proceedings of the National Academy of Sciences, was a study of ‘social contagion’, or the transfer of emotion among groups of people, as a result of a change in the nature of the stories seen by 689,003 users of Facebook. ‘When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.’ The scientists seem not to have considered how this information would be received, and the story played quite big for a while.

Perhaps the fact that people already knew this story accidentally deflected attention from what should have been a bigger scandal, exposed earlier this year in a paper from the American Journal of Epidemiology. The paper was titled ‘Association of Facebook Use with Compromised Well-Being: A Longitudinal Study’. The researchers found quite simply that the more people use Facebook, the more unhappy they are. A 1 per cent increase in ‘likes’ and clicks and status updates was correlated with a 5 to 8 per cent decrease in mental health. In addition, they found that the positive effect of real-world interactions, which enhance well-being, was accurately paralleled by the ‘negative associations of Facebook use’. In effect people were swapping real relationships which made them feel good for time on Facebook which made them feel bad. That’s my gloss rather than that of the scientists, who take the trouble to make it clear that this is a correlation rather than a definite causal relationship, but they did go so far – unusually far – as to say that the data ‘suggests a possible trade-off between offline and online relationships’. This isn’t the first time something like this effect has been found. To sum up: there is a lot of research showing that Facebook makes people feel like shit. So maybe, one day, people will stop using it.[6]

*

What, though, if none of the above happens? What if advertisers don’t rebel, governments don’t act, users don’t quit, and the good ship Zuckerberg and all who sail in her continues blithely on? We should look again at that figure of two billion monthly active users. The total number of people who have any access to the internet – as broadly defined as possible, to include the slowest dial-up speeds and creakiest developing-world mobile service, as well as people who have access but don’t use it – is three and a half billion. Of those, about 750 million are in China and Iran, which block Facebook. Russians, about a hundred million of whom are on the net, tend not to use Facebook because they prefer their native copycat site VKontakte. So put the potential audience for the site at 2.6 billion. In developed countries where Facebook has been present for years, use of the site peaks at about 75 per cent of the population (that’s in the US). That would imply a total potential audience for Facebook of 1.95 billion. At two billion monthly active users, Facebook has already gone past that number, and is running out of connected humans. Martínez compares Zuckerberg to Alexander the Great, weeping because he has no more worlds to conquer. Perhaps this is one reason for the early signals Zuck has sent about running for president – the fifty-state pretending-to-give-a-shit tour, the thoughtful-listening pose he’s photographed in while sharing milkshakes in (Presidential Ambitions klaxon!) an Iowa diner.

Whatever comes next will take us back to those two pillars of the company, growth and monetisation. Growth can only come from connecting new areas of the planet. An early experiment came in the form of Free Basics, a program offering internet connectivity to remote villages in India, with the proviso that the range of sites on offer should be controlled by Facebook. ‘Who could possibly be against this?’ Zuckerberg wrote in the Times of India. The answer: lots and lots of angry Indians. The government ruled that Facebook shouldn’t be able to ‘shape users’ internet experience’ by restricting access to the broader internet. A Facebook board member tweeted that ‘anti-colonialism has been economically catastrophic for the Indian people for decades. Why stop now?’ As Taplin points out, that remark ‘unwittingly revealed a previously unspoken truth: Facebook and Google are the new colonial powers.’

So the growth side of the equation is not without its challenges, technological as well as political. Google (which has a similar running-out-of-humans problem) is working on ‘Project Loon’, ‘a network of balloons travelling on the edge of space, designed to extend internet connectivity to people in rural and remote areas worldwide’. Facebook is working on a project involving a solar-powered drone called the Aquila, which has the wingspan of a commercial airliner, weighs less than a car, and when cruising uses less energy than a microwave oven. The idea is that it will circle remote, currently unconnected areas of the planet, for flights that last as long as three months at a time. It connects users via laser and was developed in Bridgwater, Somerset. (Amazon’s drone programme is based in the UK too, near Cambridge. Our legal regime is pro-drone.) Even the most hardened Facebook sceptic has to be a little bit impressed by the ambition and energy. But the fact remains that the next two billion users are going to be hard to find.

That’s growth, which will mainly happen in the developing world. Here in the rich world, the focus is more on monetisation, and it’s in this area that I have to admit something which is probably already apparent. I am scared of Facebook. The company’s ambition, its ruthlessness, and its lack of a moral compass scare me. It goes back to that moment of its creation, Zuckerberg at his keyboard after a few drinks creating a website to compare people’s appearance, not for any real reason other than that he was able to do it. That’s the crucial thing about Facebook, the main thing which isn’t understood about its motivation: it does things because it can. Zuckerberg knows how to do something, and other people don’t, so he does it. Motivation of that type doesn’t work in the Hollywood version of life, so Aaron Sorkin had to give Zuck a motive to do with social aspiration and rejection. But that’s wrong, completely wrong. He isn’t motivated by that kind of garden-variety psychology. He does this because he can, and justifications about ‘connection’ and ‘community’ are ex post facto rationalisations. The drive is simpler and more basic. That’s why the impulse to growth has been so fundamental to the company, which is in many respects more like a virus than it is like a business. Grow and multiply and monetise. Why? There is no why. Because.

Automation and artificial intelligence are going to have a big impact in all kinds of worlds. These technologies are new and real and they are coming soon. Facebook is deeply interested in these trends. We don’t know where this is going, we don’t know what the social costs and consequences will be, we don’t know what will be the next area of life to be hollowed out, the next business model to be destroyed, the next company to go the way of Polaroid or the next business to go the way of journalism or the next set of tools and techniques to become available to the people who used Facebook to manipulate the elections of 2016. We just don’t know what’s next, but we know it’s likely to be consequential, and that a big part will be played by the world’s biggest social network. On the evidence of Facebook’s actions so far, it’s impossible to face this prospect without unease.

[1] When Google relaunched as Alphabet, ‘Don’t be evil’ was replaced as an official corporate code of conduct by ‘Do the right thing.’

[2] Note the ‘professed’. As Seth Stephens-Davidowitz points out in his new book Everybody Lies (Bloomsbury, £20), researchers have studied the difference between the language used on Google, where people tend to tell the truth because they are anonymously looking for answers, and the language used on Facebook, where people are projecting an image. On Facebook, the most common terms associated with the phrase ‘my husband is …’ are ‘the best’, ‘my best friend’, ‘amazing’, ‘the greatest’ and ‘so cute’. On Google, the top five are ‘amazing’, ‘a jerk’, ‘annoying’, ‘gay’ and ‘mean’. It would be interesting to know if there’s a husband out there who achieves the full Google set and is an amazing annoying mean gay jerk.

[3] One example of their work is Experian’s ‘Mosaic’ system of characterising consumer segments, which divides the population into 66 segments, from ‘Cafés and Catchments’ to ‘Penthouse Chic’, ‘Classic Grandparents’ and ‘Bus-Route Renters’.

[4] I should say that the information is hashed before it is exchanged, so that although the respective companies know everything about you and do share it, they do so in a pseudonymised form. Or a pseudo-pseudonymised form; there is an argument to be had about just how anonymous this form of anonymity actually is.

[5] The idea of one price for everyone is relatively recent. John Wanamaker gets the credit for having come up with the notion of fixed price tags in Philadelphia in 1861. The idea came from the Quakers, who thought that everyone should be treated equally.

[6] A study from 2015 in Computers in Human Behavior, ‘Facebook Use, Envy and Depression among College Students: Is Facebooking Depressing?’ came to the answer no – except when the effects of envy were included, in which case the answer was yes. But since envious comparison is the entire Girardian basis of Facebook, that qualified ‘no’ looks an awful lot like a ‘yes’. A 2016 paper in Current Opinion in Psychiatry that studied ‘The Interplay between Facebook Use, Social Comparison, Envy and Depression’ found that Facebook use is linked to envy and depression, another discovery that would come as no surprise to Girard. A paper from 2013 in PLoS One showed that ‘Facebook Use Predicts Declines in Subjective Well-Being in Young Adults’: in other words, Facebook makes young people sad. A 2016 paper in the journal Cyberpsychology, Behavior and Social Networking, entitled ‘The Facebook Experiment: Quitting Facebook Leads to Higher Levels of Well-Being’, found that Facebook makes people sad and that people were happier when they stopped using it.


windybank (Cammeray, New South Wales, Australia, 66 days ago): You

How economists rode maths to become our era’s astrologers


Since the 2008 financial crisis, colleges and universities have faced increased pressure to identify essential disciplines, and cut the rest. In 2009, Washington State University announced it would eliminate the department of theatre and dance, the department of community and rural sociology, and the German major – the same year that the University of Louisiana at Lafayette ended its philosophy major. In 2012, Emory University in Atlanta did away with the visual arts department and its journalism programme. The cutbacks aren’t restricted to the humanities: in 2011, the state of Texas announced it would eliminate nearly half of its public undergraduate physics programmes. Even when there’s no downsizing, faculty salaries have been frozen and departmental budgets have shrunk.

But despite the funding crunch, it’s a bull market for academic economists. According to a 2015 sociological study in the Journal of Economic Perspectives, the median salary of economics teachers in 2012 increased to $103,000 – nearly $30,000 more than sociologists earned. For the top 10 per cent of economists, that figure jumps to $160,000, higher than the next most lucrative academic discipline – engineering. These figures, stress the study’s authors, do not include other sources of income such as consulting fees for banks and hedge funds, which, as many learned from the documentary Inside Job (2010), are often substantial. (Ben Bernanke, a former academic economist and ex-chairman of the Federal Reserve, earns $200,000-$400,000 for a single appearance.)

Unlike engineers and chemists, economists cannot point to concrete objects – cell phones, plastic – to justify the high valuation of their discipline. Nor, in the case of financial economics and macroeconomics, can they point to the predictive power of their theories. Hedge funds employ cutting-edge economists who command princely fees, but routinely underperform index funds. Eight years ago, Warren Buffett made a 10-year, $1 million bet that a portfolio of hedge funds would lose to the S&P 500, and it looks like he’s going to collect. In 1998, a fund that boasted two Nobel Laureates as advisors collapsed, nearly causing a global financial crisis.

The failure of the field to predict the 2008 crisis has also been well-documented. In 2003, for example, only five years before the Great Recession, the Nobel Laureate Robert E Lucas Jr told the American Economic Association that ‘macroeconomics […] has succeeded: its central problem of depression prevention has been solved’. Short-term predictions fare little better – in April 2014, for instance, a survey of 67 economists yielded 100 per cent consensus: interest rates would rise over the next six months. Instead, they fell. A lot.

Nonetheless, surveys indicate that economists see their discipline as ‘the most scientific of the social sciences’. What is the basis of this collective faith, shared by universities, presidents and billionaires? Shouldn’t successful and powerful people be the first to spot the exaggerated worth of a discipline, and the least likely to pay for it?

In the hypothetical worlds of rational markets, where much of economic theory is set, perhaps. But real-world history tells a different story, of mathematical models masquerading as science and a public eager to buy them, mistaking elegant equations for empirical accuracy.

As an extreme example, take the extraordinary success of Evangeline Adams, a turn-of-the-20th-century astrologer whose clients included the president of Prudential Insurance, two presidents of the New York Stock Exchange, the steel magnate Charles M Schwab, and the banker J P Morgan. To understand why titans of finance would consult Adams about the market, it is essential to recall that astrology used to be a technical discipline, requiring reams of astronomical data and mastery of specialised mathematical formulas. ‘An astrologer’ is, in fact, the Oxford English Dictionary’s second definition of ‘mathematician’. For centuries, mapping stars was the job of mathematicians, a job motivated and funded by the widespread belief that star-maps were good guides to earthly affairs. The best astrology required the best astronomy, and the best astronomy was done by mathematicians – exactly the kind of person whose authority might appeal to bankers and financiers.

In fact, when Adams was arrested in 1914 for violating a New York law against astrology, it was mathematics that eventually exonerated her. During the trial, her lawyer Clark L Jordan emphasised mathematics in order to distinguish his client’s practice from superstition, calling astrology ‘a mathematical or exact science’. Adams herself demonstrated this ‘scientific’ method by reading the astrological chart of the judge’s son. The judge was impressed: the plaintiff, he observed, went through a ‘mathematical process to get at her conclusions… I am satisfied that the element of fraud… is absent here.’

The enchanting force of mathematics blinded the judge – and Adams’s prestigious clients – to the fact that astrology relies upon a highly unscientific premise, that the position of stars predicts personality traits and human affairs such as the economy. It is this enchanting force that explains the enduring popularity of financial astrology, even today. The historian Caley Horan at the Massachusetts Institute of Technology described to me how computing technology made financial astrology explode in the 1970s and ’80s. ‘Within the world of finance, there’s always a superstitious, quasi-spiritual trend to find meaning in markets,’ said Horan. ‘Technical analysts at big banks, they’re trying to find patterns in past market behaviour, so it’s not a leap for them to go to astrology.’ In 2000, USA Today quoted Robin Griffiths, the chief technical analyst at HSBC, the world’s third largest bank, saying that ‘most astrology stuff doesn’t check out, but some of it does’.

Ultimately, the problem isn’t with worshipping models of the stars, but rather with uncritical worship of the language used to model them, and nowhere is this more prevalent than in economics. The economist Paul Romer at New York University has recently begun calling attention to an issue he dubs ‘mathiness’ – first in the paper ‘Mathiness in the Theory of Economic Growth’ (2015) and then in a series of blog posts. Romer believes that macroeconomics, plagued by mathiness, is failing to progress as a true science should, and compares debates among economists to those between 16th-century advocates of heliocentrism and geocentrism. Mathematics, he acknowledges, can help economists to clarify their thinking and reasoning. But the ubiquity of mathematical theory in economics also has serious downsides: it creates a high barrier to entry for those who want to participate in the professional dialogue, and makes checking someone’s work excessively laborious. Worst of all, it imbues economic theory with unearned empirical authority.

‘I’ve come to the position that there should be a stronger bias against the use of math,’ Romer explained to me. ‘If somebody came and said: “Look, I have this Earth-changing insight about economics, but the only way I can express it is by making use of the quirks of the Latin language”, we’d say go to hell, unless they could convince us it was really essential. The burden of proof is on them.’

Right now, however, there is widespread bias in favour of using mathematics. The success of math-heavy disciplines such as physics and chemistry has lent mathematical formulas an air of decisive authority. Lord Kelvin, the 19th-century mathematical physicist, expressed this quantitative obsession:

When you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot measure it… in numbers, your knowledge is of a meagre and unsatisfactory kind.

The trouble with Kelvin’s statement is that measurement and mathematics do not guarantee the status of science – they guarantee only the semblance of science. When the presumptions or conclusions of a scientific theory are absurd or simply false, the theory ought to be questioned and, eventually, rejected. The discipline of economics, however, is presently so blinkered by the talismanic authority of mathematics that its theories remain overvalued and unchecked.

Romer is not the first to elaborate the mathiness critique. In 1886, an article in Science accused economics of misusing the language of the physical sciences to conceal ‘emptiness behind a breastwork of mathematical formulas’. More recently, Deirdre N McCloskey’s The Rhetoric of Economics (1998) and Robert H Nelson’s Economics as Religion (2001) both argued that mathematics in economic theory serves, in McCloskey’s words, primarily to deliver the message ‘Look at how very scientific I am.’

After the Great Recession, the failure of economic science to protect our economy was once again impossible to ignore. In 2009, the Nobel Laureate Paul Krugman tried to explain it in The New York Times with a version of the mathiness diagnosis. ‘As I see it,’ he wrote, ‘the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.’ Krugman named economists’ ‘desire… to show off their mathematical prowess’ as the ‘central cause of the profession’s failure’.

The mathiness critique isn’t limited to macroeconomics. In 2014, the Stanford financial economist Paul Pfleiderer published the paper ‘Chameleons: The Misuse of Theoretical Models in Finance and Economics’, which helped to inspire Romer’s understanding of mathiness. Pfleiderer called attention to the prevalence of ‘chameleons’ – economic models ‘with dubious connections to the real world’ that substitute ‘mathematical elegance’ for empirical accuracy. Like Romer, Pfleiderer wants economists to be transparent about this sleight of hand. ‘Modelling,’ he told me, ‘is now elevated to the point where things have validity just because you can come up with a model.’

The notion that an entire culture – not just a few eccentric financiers – could be bewitched by empty, extravagant theories might seem absurd. How could all those people, all that math, be mistaken? This was my own feeling as I began investigating mathiness and the shaky foundations of modern economic science. Yet, as a scholar of Chinese religion, it struck me that I’d seen this kind of mistake before, in ancient Chinese attitudes towards the astral sciences. Back then, governments invested incredible amounts of money in mathematical models of the stars. To evaluate those models, government officials had to rely on a small cadre of experts who actually understood the mathematics – experts riven by ideological differences, who couldn’t even agree on how to test their models. And, of course, despite collective faith that these models would improve the fate of the Chinese people, they did not.

Astral Science in Early Imperial China, a forthcoming book by the historian Daniel P Morgan, shows that in ancient China, as in the Western world, the most valuable type of mathematics was devoted to the realm of divinity – to the sky, in their case (and to the market, in ours). Just as astrology and mathematics were once synonymous in the West, the Chinese spoke of li, the science of calendrics, which early dictionaries also glossed as ‘calculation’, ‘numbers’ and ‘order’. Li models, like macroeconomic theories, were considered essential to good governance. In the classic Book of Documents, the legendary sage king Yao transfers the throne to his successor with mention of a single duty: ‘Yao said: “Oh thou, Shun! The li numbers of heaven rest in thy person.”’

China’s oldest mathematical text invokes astronomy and divine kingship in its very title – The Arithmetical Classic of the Gnomon of the Zhou. The title’s inclusion of ‘Zhou’ recalls the mythic Eden of the Western Zhou dynasty (1045–771 BCE), implying that paradise on Earth can be realised through proper calculation. The book’s introduction to the Pythagorean theorem asserts that ‘the methods used by Yu the Great in governing the world were derived from these numbers’. It was an unquestioned article of faith: the mathematical patterns that govern the stars also govern the world. Faith in a divine, invisible hand, made visible by mathematics. No wonder that a newly discovered text fragment from 200 BCE extols the virtues of mathematics over the humanities. In it, a student asks his teacher whether he should spend more time learning speech or numbers. His teacher replies: ‘If my good sir cannot fathom both at once, then abandon speech and fathom numbers, [for] numbers can speak, [but] speech cannot number.’

Modern governments, universities and businesses underwrite the production of economic theory with huge amounts of capital. The same was true for li production in ancient China. The emperor – the ‘Son of Heaven’ – spent astronomical sums refining mathematical models of the stars. Take the armillary sphere, such as the two-metre cage of graduated bronze rings in Nanjing, made to represent the celestial sphere and used to visualise data in three dimensions. As Morgan emphasises, the sphere was literally made of money. Bronze being the basis of the currency, governments were smelting cash by the metric ton to pour it into li. A divine, mathematical world-engine, built of cash, sanctifying the powers that be.

The enormous investment in li depended on a huge assumption: that good government, successful rituals and agricultural productivity all depended upon the accuracy of li. But there were, in fact, no practical advantages to the continued refinement of li models. The calendar rounded off decimal points such that the difference between two models, hotly contested in theory, didn’t matter to the final product. The work of selecting auspicious days for imperial ceremonies thus benefited only in appearance from mathematical rigour. And of course the comets, plagues and earthquakes that these ceremonies promised to avert kept on coming. Farmers, for their part, went about business as usual. Occasional governmental efforts to scientifically micromanage farm life in different climes using li ended in famine and mass migration.

Like many economic models today, li models were less important to practical affairs than their creators (and consumers) thought them to be. And, like today, only a few people could understand them. In 101 BCE, Emperor Wudi tasked high-level bureaucrats – including the Great Director of the Stars – with creating a new li that would glorify the beginning of his path to immortality. The bureaucrats refused the task because ‘they couldn’t do the math’, and recommended the emperor outsource it to experts.

The debates of these ancient li experts bear a striking resemblance to those of present-day economists. In 223 CE, a petition was submitted to the emperor asking him to approve tests of a new li model developed by the assistant director of the astronomical office, a man named Han Yi.

At the time of the petition, Han Yi’s model, and its competitor, the so-called Supernal Icon, had already been subjected to three years of ‘reference’, ‘comparison’ and ‘exchange’. Still, no one could agree which one was better. Nor, for that matter, was there any agreement on how they should be tested.

In the end, a live trial involving the prediction of eclipses and heliacal risings was used to settle the debate. With the benefit of hindsight, we can see this trial was seriously flawed. The heliacal rising (first visibility) of planets depends on non-mathematical factors such as eyesight and atmospheric conditions. That’s not to mention the scoring of the trial, which was modelled on archery competitions. Archers scored points for proximity to the bullseye, with no consideration for overall accuracy. The equivalent in economic theory might be to grant a model high points for success in predicting short-term markets, while failing to deduct for missing the Great Recession.

None of this is to say that li models were useless or inherently unscientific. For the most part, li experts were genuine mathematical virtuosos who valued the integrity of their discipline. Despite being based on an inaccurate assumption – that the Earth was at the centre of the cosmos – their models really did work to predict celestial motions. Imperfect though the live trial might have been, it indicates that superior predictive power was a theory’s most important virtue. All of this is consistent with real science, and Chinese astronomy progressed as a science, until it reached the limits imposed by its assumptions.

However, there was no science to the belief that accurate li would improve the outcome of rituals, agriculture or government policy. No science to the Hall of Light, a temple for the emperor built on the model of a magic square. There, by numeric ritual gesture, the Son of Heaven was thought to channel the invisible order of heaven for the prosperity of man. This was quasi-theology, the belief that heavenly patterns – mathematical patterns – could be used to model every event in the natural world, in politics, even the body. Macro- and microcosm were scaled reflections of one another, yin and yang in a unifying, salvific mathematical vision. The expensive gadgets, the personnel, the bureaucracy, the debates, the competition – all of this testified to the divinely authoritative power of mathematics. The result, then as now, was overvaluation of mathematical models based on unscientific exaggerations of their utility.

In ancient China it would have been unfair to blame li experts for the pseudoscientific exploitation of their theories. These men had no way to evaluate the scientific merits of assumptions and theories – ‘science’, in a formalised, post-Enlightenment sense, didn’t really exist. But today it is possible to distinguish, albeit roughly, science from pseudoscience, astronomy from astrology. Hypothetical theories, whether those of economists or conspiracists, aren’t inherently pseudoscientific. Conspiracy theories can be diverting – even instructive – flights of fancy. They become pseudoscience only when promoted from fiction to fact without sufficient evidence.

Romer believes that fellow economists know the truth about their discipline, but don’t want to admit it. ‘If you get people to lower their shield, they’ll tell you it’s a big game they’re playing,’ he told me. ‘They’ll say: “Paul, you may be right, but this makes us look really bad, and it’s going to make it hard for us to recruit young people.”’

Demanding more honesty seems reasonable, but it presumes that economists understand the tenuous relationship between mathematical models and scientific legitimacy. In fact, many assume the connection is obvious – just as in ancient China, the connection between li and the world was taken for granted. When reflecting in 1999 on what makes economics more scientific than the other social sciences, the Harvard economist Richard B Freeman explained that economics ‘attracts stronger students than [political science or sociology], and our courses are more mathematically demanding’. In Lives of the Laureates (2004), Robert E Lucas Jr writes rhapsodically about the importance of mathematics: ‘Economic theory is mathematical analysis. Everything else is just pictures and talk.’ Lucas’s veneration of mathematics leads him to adopt a method that can only be described as a subversion of empirical science:

The construction of theoretical models is our way to bring order to the way we think about the world, but the process necessarily involves ignoring some evidence or alternative theories – setting them aside. That can be hard to do – facts are facts – and sometimes my unconscious mind carries out the abstraction for me: I simply fail to see some of the data or some alternative theory.

Even for those who agree with Romer, conflict of interest still poses a problem. Why would sceptical astronomers question the emperor’s faith in their models? In a phone conversation, Daniel Hausman, a philosopher of economics at the University of Wisconsin, put it bluntly: ‘If you reject the power of theory, you demote economists from their thrones. They don’t want to become like sociologists.’

George F DeMartino, an economist and an ethicist at the University of Denver, frames the issue in economic terms. ‘The interest of the profession is in pursuing its analysis in a language that’s inaccessible to laypeople and even some economists,’ he explained to me. ‘What we’ve done is monopolise this kind of expertise, and we of all people know how that gives us power.’

Every economist I interviewed agreed that conflicts of interest were highly problematic for the scientific integrity of their field – but only tenured ones were willing to go on the record. ‘In economics and finance, if I’m trying to decide whether I’m going to write something favourable or unfavourable to bankers, well, if it’s favourable that might get me a dinner in Manhattan with movers and shakers,’ Pfleiderer said to me. ‘I’ve written articles that wouldn’t curry favour with bankers but I did that when I had tenure.’

Then there’s the additional problem of sunk-cost bias. If you’ve invested in an armillary sphere, it’s painful to admit that it doesn’t perform as advertised. When confronted with their profession’s lack of predictive accuracy, some economists find it difficult to admit the truth. Easier, instead, to double down, like the economist John H Cochrane at the University of Chicago. The problem isn’t too much mathematics, he writes in response to Krugman’s 2009 post-Great-Recession mea culpa for the field, but rather ‘that we don’t have enough math’. Astrology doesn’t work, sure, but only because the armillary sphere isn’t big enough and the equations aren’t good enough.

If overhauling economics depended solely on economists, then mathiness, conflict of interest and sunk-cost bias could easily prove insurmountable. Fortunately, non-experts also participate in the market for economic theory. If people remain enchanted by PhDs and Nobel Prizes awarded for the production of complicated mathematical theories, those theories will remain valuable. If they become disenchanted, the value will drop.

Economists who rationalise their discipline’s value can be convincing, especially with prestige and mathiness on their side. But there’s no reason to keep believing them. The pejorative verb ‘rationalise’ itself warns of mathiness, reminding us that we often deceive each other by making prior convictions, biases and ideological positions look ‘rational’, a word that confuses truth with mathematical reasoning. To be rational is, simply, to think in ratios, like the ratios that govern the geometry of the stars. Yet when mathematical theory is the ultimate arbiter of truth, it becomes difficult to see the difference between science and pseudoscience. The result is people like the judge in Evangeline Adams’s trial, or the Son of Heaven in ancient China, who trust the mathematical exactitude of theories without considering their performance – that is, who confuse math with science, rationality with reality.

There is no longer any excuse for making the same mistake with economic theory. For more than a century, the public has been warned, and the way forward is clear. It’s time to stop wasting our money and recognise the high priests for what they really are: gifted social scientists who excel at producing mathematical explanations of economies, but who fail, like astrologers before them, at prophecy.

Read the whole story
windybank
91 days ago
reply
This. Economists might be mathematically ok one day. But not yet. Social science is an incredibly hard problem. We can only just park a car.
Cammeray, New South Wales, Australia
Share this story
Delete

Loudness

1 Comment

One of the most contentious issues in CD mastering these days is relative loudness. The tendency to try and make CDs as loud as possible in the mastering stage (and increasingly even during mixing) has become so common that it's viewed by many people today as what "mastering" is. However, many are unaware of the major tradeoffs in sonic quality introduced in the quest to be the loudest track on the shuffle playlist. Hopefully, some background information on this aspect of the mastering process will leave you better able to make the right decisions for your project.

Decibels

The decibel (1/10th of a Bel, named after Alexander Graham Bell) is a relative unit of measure expressing a ratio between some reference level and the level you are measuring. Since it is a ratio and not an absolute measurement, it can mean a lot of different things and always needs a reference suffix to be a useful number. Some common decibel readings used in various audio fields are dBSPL (for sound pressure levels traveling through air), dBu (for electrical levels in Volts), dBm (for power levels in Watts), dBVU (volume units, the standard measurement of electrical audio levels on mixing consoles and analog tape machines), and dBFS (or full scale, the standard measurement level for digital audio levels). The latter two are the most relied upon in audio mixing and mastering.
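
Because a decibel is just a scaled logarithm of a ratio, the arithmetic behind all of these scales is compact. Here is a minimal Python sketch (the function names are mine, purely for illustration) of the two conventions used throughout this article: a factor of 20 for amplitude-like quantities such as volts or sample values, and a factor of 10 for power-like quantities such as watts.

    import math

    def db_amplitude(value, reference):
        # Amplitude-like quantities (volts, sample values) square into power,
        # hence the factor of 20 rather than 10.
        return 20.0 * math.log10(value / reference)

    def db_power(value, reference):
        # Power-like quantities (watts) use the factor of 10 directly.
        return 10.0 * math.log10(value / reference)

    print(db_amplitude(1.0, 1.0))  # 0.0 dB: the signal equals the reference
    print(db_amplitude(0.5, 1.0))  # about -6.0 dB: half the reference amplitude
    print(db_power(2.0, 1.0))      # about +3.0 dB: double the reference power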

Analog Recording Levels

The major shortcoming of analog recording systems, historically, was always the noise floor of the storage or playback medium, such as tape hiss or surface noise (crackles and pops) on vinyl records. All analog recording and playback media have some sort of inherent noise (though today it is often very low). Design engineers continuously tried to get as far away from this noise floor as possible by achieving greater headroom, or higher output levels, from various recording formats. The result was that the program material was more clearly audible and farther above the noise. Once clarity and audibility stopped being the main problem, greater headroom was typically used as a sort of "reserve level" so that peaks in program material could be accurately represented without distorting. The main limitations on how loud one could make a piece of music were either excessive distortion on analog tape and vinyl above a certain level, or the ability of a needle to track a record groove in a playable fashion.

Digital

With the advent of digital recording and playback technologies in the early 1980s, the primary perceived advantage was the tremendous increase in dynamic range and headroom due to a greatly lowered noise floor. Binary digits have no inherent hiss or crackle (in fact, the CD playback format actually required the introduction of a small amount of randomized noise, known as dither, to cover up the distortion created by the lowest audio bit switching on and off), so most audio engineers believed that they would now have the ability to allow peak levels in music to occasionally be much higher than the average level. This would then lessen the need for compression and limiting (which reduce the level of peaks so that the average level can be raised further away from the formerly problematic noise floor).

In digital audio, an instantaneous moment of sound is described by a string of ones and zeroes, and there is a limit to the loudest signal that can be described numerically (for CD audio, that's a string of 16 ones). A different kind of dB scale was needed to take this into account. This is known as dBFS, or full scale. As opposed to the VU scale, in which zero on the scale is the average operating level that program swings above and below, zero in dBFS is the absolute highest level allowed. It is the very top end of the scale, and all usable audio program falls below it. The dBFS scale uses negative numbers to represent audio program level below the maximum zero. In studios with both types of metering present, a point on the negative dBFS scale would be correlated with a point on the dBVU scale. Typically this is something like -20 dBFS = 0 dBVU, so that "0 dB" on a VU meter would leave approximately 20 dB of headroom for signal peaks on the dBFS scale.
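
As a quick illustration of that correlation, converting a VU reading to dBFS is a single offset. The sketch below assumes the -20 dBFS = 0 dBVU calibration just mentioned; the exact offset varies from studio to studio.

    # Assumes the -20 dBFS = 0 dBVU calibration described above;
    # the exact offset varies from studio to studio.
    DBVU_TO_DBFS_OFFSET = -20.0

    def dbvu_to_dbfs(dbvu):
        # Map a console VU reading onto the digital full-scale meter.
        return dbvu + DBVU_TO_DBFS_OFFSET

    print(dbvu_to_dbfs(0.0))   # -20.0 dBFS: nominal level, 20 dB of peak headroom
    print(dbvu_to_dbfs(20.0))  # 0.0 dBFS: a +20 dBVU peak just reaches full scale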

There are two different types of ballistics (or response times) used with dBFS metering. The first is peak level: a very fast response used to see the highest instantaneous signal peaks. Many mastering engineers choose a ceiling between -0.1 dBFS and -0.3 dBFS at or below which the highest peaks should remain (this small amount of headroom compensates for intersample peaks, where the arc of a waveform described by two adjacent samples can sometimes create a signal of higher level than either single sample represents). The other form of ballistics, more commonly used as a reference for CD loudness, is RMS or "root mean square" metering. This is a way of averaging level over a longer period of time, similar to the ballistics of a mechanical VU (volume units) meter, and it corresponds closely to the human perception of loudness. So if you play a whole CD track while watching it on a typical digital meter, you will get a peak level that might reach anywhere from -0.5 dBFS to 0 dBFS, as well as an RMS level that is lower. The distance between the peak levels and the RMS levels in a song is where the big changes have occurred in the last decade. The peak levels can't get any louder. They are already at 0 dBFS. But the RMS levels, which correspond to loudness or volume, have been creeping up.
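
Both meter ballistics are easy to mimic in code. The following rough Python sketch (simplified: a real peak meter also catches intersample peaks, and a real RMS meter uses a sliding window rather than one whole-buffer average) shows why a full-scale sine wave reads roughly 0 dBFS on a peak meter but about -3 dBFS RMS:

    import math

    def peak_dbfs(samples):
        # Peak metering: the single largest instantaneous absolute sample.
        peak = max(abs(s) for s in samples)
        return 20.0 * math.log10(peak) if peak > 0 else float("-inf")

    def rms_dbfs(samples):
        # RMS metering: average energy over the window, which tracks perceived
        # loudness far better than the peak does. Note that
        # 10 * log10(mean square) equals 20 * log10(rms).
        mean_square = sum(s * s for s in samples) / len(samples)
        return 10.0 * math.log10(mean_square) if mean_square > 0 else float("-inf")

    # One second of a full-scale 440 Hz sine at 44.1 kHz:
    sine = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
    print(round(peak_dbfs(sine), 1))  # about 0.0 dBFS
    print(round(rms_dbfs(sine), 1))   # about -3.0 dBFS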

Take a listen to any CD from the mid 80s to the early 90s and you'll find RMS levels that are usually somewhere between -18 dBFS and -12 dBFS. Two examples from this time span are represented as waveform displays below:

My Bloody Valentine—"Only Shallow" (Loveless, Sire Records, 1991)

No mastering credit. Highest average RMS level -17.3 dBFS / Maximum peak level -4.2 dBFS:

Nirvana—"Heart-Shaped Box" (In Utero, DGC Records, 1993)

Bob Ludwig, Gateway Mastering. Highest average RMS level -12.7 dBFS / Maximum peak level -0.2 dBFS:

These two examples show a few different things:

First off, for many music fans, the "Loveless" album from My Bloody Valentine is one of the hallmarks of a "huge" sounding record. It is a tremendous wall of sound. The punchline is that it is one of the quietest CDs (in terms of RMS level) even in a relatively "quiet" period of CD mastering. One could easily get another 4 dB of volume out of this CD without even beginning to limit the peaks.

Look at the rectangular window as a box you can fill up with sound (the waveform). The top and bottom of the box represents the maximum level allowed: 0dBFS. The average level of the music can be seen as the area where the waveform is solid black. When the solid area gets thicker, the average level goes up and when it gets thinner, the level goes down.

The jagged bits and spikes coming off the solid area are the peaks. The more white background you can see peaking through the black waveform, the bigger the distance between the peak level and the average level.

You can see in the MBV track that the peaks don't even come anywhere near the maximum allowed level, and although the waveform is dense (due to the density of the music) it has a shape that varies and the peaks stick up at random heights out of the average area. The changing shape is a visual representation of the dynamics in the music. It's easy to see that the concern for this CD was simply for how it sounded and for retaining the dynamics, not for making it louder by filling the box and bringing the peaks down into the area of the average level. No one seemed bothered by this when it was originally released. People simply turned it up to the volume they found appropriate at home.

Next is a CD that was near the loud end of the spectrum in its day. "In Utero" can be placed around the beginning of the current era of inflated loudness. Using the "box of sound" analogy again, you can see that the goal is beginning to be to try to use all the area available by shaving all the peaks off at the same level (peak limiting) and then turning the level up until those peaks almost hit the edges of the box. This track subjectively seems quite a bit louder than the MBV track, but still has a musical sound and a fair amount of dynamic range. You can still see a changing shape in the average level, and plenty of white space between the average area and the peaks.

Next, look at an example that illustrates the current trend in CD mastering:

Radiohead—"Dollars and Cents" (Amnesiac, Capitol Records, 2001)

Bob Ludwig, Gateway Mastering. Highest average RMS level -6.3 dBFS / Maximum peak level -0.09 dBFS:

Here you can see that the box of sound is pretty well filled up! Both the peaks and much of the average material go right up to the edge for much of the song. The denser the black area is, the less distance between the peak and average levels. So the average level is much louder, and the overall dynamics of the song are less apparent. In the case of this Radiohead album, it is done in a way that is still shockingly transparent to the music (compared with what you see here) and "Amnesiac" is a remarkably good sounding CD (more on this later). But, if you consider that the same engineer mastered the Nirvana album shown above only 8 years earlier, you can see how other considerations, aside from simply the best sound for the material, have entered the picture. Filling the box is becoming increasingly important (to some!).

Finally, take a look at how things have changed with two releases of the same CD seven years apart. Admittedly, the example below has a bit of a wild-card thrown in (namely the artist, Iggy Pop, in the liner notes for the re-release takes full credit for the sonic decisions in the newer version), but it's a startling example of how the acceptable delivery of modern music has changed:

Stooges—"Search and Destroy" (Raw Power, Columbia Records, 1990 CD release)

No mastering credit. Highest average RMS level -13.9 dBFS / Maximum peak level -1.7 dBFS:

Stooges—"Search and Destroy" (Raw Power, Sony Records, 1997 remastered CD release)

Sony Mastering. Highest average RMS level -2.58 dBFS / Max peak level 0.0 dBFS (constant digital overs):

THIS BOX IS FULL!!!!!!!!!! There is a difference of less than 3 decibels between the loudest average part of this track and the loudest digital word that can be represented as sound. This is what we call a true sausage. This record is shockingly loud, but also just shocking. The volume in this case has been achieved with almost constant clipping of the original waveforms. Which brings us to:

Distortion!

There are a number of ways to reduce the difference between the loudest and quietest parts of the music (that is, to reduce its dynamic range), which is what is being done to music on a "louder" CD. Here are some brief descriptions of the most common among them:

  • COMPRESSION is a process in which you choose a threshold at a given level and tell the device that beyond this threshold, gain will be reduced so that for an increase of "y" decibels at the input, the output level will rise by only "z" decibels. This is expressed as a ratio y:z, such as 2:1, meaning that for every 2 decibels above the threshold at the compressor's input, the output increases by only 1 decibel. This has the result of taking the louder parts and making them come out quieter, allowing you to turn the whole thing up at the output stage, since the previously loudest parts are now compressed downward (a code sketch of this gain math follows the waveform closeups below).
  • LIMITING is simply compression at a very high ratio (generally between 10:1 and ∞:1, or infinity to one). This sort of high-ratio compression was originally used in radio broadcasting where a live broadcast had to accommodate a wide range of levels with the certainty that nothing would overload. No matter what level the input is above the threshold, the output level either varies only very slightly or not at all.
  • DIGITAL "BRICKWALL" LIMITING is a more recent development. This process can be provided by either software plug-ins or a dedicated hardware box. These limiters process digital information rather than an analog electrical signal, introducing the ability to "look ahead" by buffering (or temporarily storing) the data so that they can effectively plan ahead on how to limit the signal peak before it happens. Used modestly, these devices can raise gain quite a bit by limiting a small amount of peaks, sacrificing a few dynamic moments for a safe (digitally speaking) overall gain increase. However, many engineers certainly use brickwall limiting very immodestly, creating noticeable distortion artifacts and drastically changing the balance of the mix.
  • CLIPPING or purposefully overloading the inputs of analog to digital (A/D) converters is the latest in "loudening" technology. Some A/D converter devices can handle this questionable use better than others, but in all cases the result is that the peak of a waveform is simply lopped off. This is probably the most dubious way to remove the dynamics from music. The Stooges remaster displayed above is probably the most dramatic example of this technique, and it is certainly the most ear-fatigue-inducing way to get things loud. Below are two closeups of waveforms from some of the examples shown previously.

Radiohead—"Dollars and Cents":

In this example, a very loud but musical sounding master was achieved through a layered approach to compression that probably began during tracking, continued through the mixing, and was finished off in mastering. There are no clipped waveforms here as shown above, and the result is quite loud but not terribly crushed or distorted music.

Stooges (remaster)—"Search and Destroy":

In this example, you can see most vividly how the waveforms are so clipped that large areas are simply flatlines where musical detail used to be. There are large amounts of music that simply disappear into the non-existent area above 0dBFS. All these flat lines sound like crunchy, your-stereo-is-broken type-stuff. Some people like it. Fortunately, not everyone does (yet?)!
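
To make the gain arithmetic behind compression, limiting and clipping concrete, here is the minimal Python sketch promised in the list above. The function names are illustrative, and a real dynamics processor adds attack and release smoothing that this static curve ignores.

    def compress_db(level_db, threshold_db, ratio):
        # Static compressor curve: below the threshold the signal passes
        # unchanged; above it, every `ratio` dB at the input yields only
        # 1 dB more at the output.
        if level_db <= threshold_db:
            return level_db
        return threshold_db + (level_db - threshold_db) / ratio

    def hard_clip(sample, ceiling=1.0):
        # Clipping simply lops the waveform off at the ceiling: the source
        # of the flat-topped "sausage" waveforms in the Stooges remaster.
        return max(-ceiling, min(ceiling, sample))

    print(compress_db(-10.0, -20.0, 2.0))     # -15.0: at 2:1, 10 dB over becomes 5 dB over
    print(compress_db(-10.0, -20.0, 1000.0))  # -19.99: a very high ratio behaves as a limiter
    print(hard_clip(1.7))                     # 1.0: everything above the ceiling is gone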

Summing Up

The overall point of this is that there's still no free lunch. The reason CDs were quieter in the past was that it took a while for it to occur to people to try to hijack the volume knob from listeners. People spent a long time mixing their music to sound just the way they wanted it. Typically, they didn't want someone to take that music and make radical or drastic changes to it after hearing it only a handful of times in a mastering session. The job of the mastering engineer was just to balance out any inconsistencies and transfer it to the delivery medium.

In this age, we all do tend to listen to music in much noisier environments and generally, perhaps, pay less attention to the music we hear. In such an environment, it is tempting to try to make your music "shout out" the loudest. However, the only way to blast into people's ears louder than the last song is to introduce sonic sacrifices to your original mixes. Much of today's modern music can certainly jump out at you from even the tinniest of computer speakers, but often doesn't stand up to any serious scrutiny on a good full-range playback system. And it's often chock full of pumping compression, distortion and other ear-fatiguing artifacts. Highly compressed or limited music with no dynamic range is physically difficult to listen to for any length of time. This "hearing fatigue" doesn't announce itself as obviously as the aching muscles of other forms of physical fatigue, so it's not obvious to the listener that he or she is being affected. But if you ever wonder why you don't like modern music as much as older recordings, or why you don't like to listen to it for long periods of time (much less over the years), this physical and mental hearing fatigue is a big part of the reason.

The situation has gotten so out of hand that there is now a feature in iTunes called "Sound Check" that goes through your whole library and analyzes the average volume of all the songs, making a change to the metadata associated with the loudest tracks that tells the player to play them at a lower volume. This is a pretty imperfect solution (often, ironically, it leaves sparse or acoustic music at much higher levels than thick rock music), but it is an attempt to mitigate the unsettling and sometimes dangerous (depending on how loud you listen to music) level differences that exist in digitally delivered music today.
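
Apple hasn't published how Sound Check computes its adjustment, so the following is only a replay-gain-style sketch of the general idea (the -16 dBFS target is an assumption, not Apple's figure): measure each track's average level once, then store the gain a player should apply to bring every track toward a common loudness.

    def playback_gain_db(track_rms_dbfs, target_dbfs=-16.0):
        # One stored gain offset per track: loud masters get pulled down
        # toward the target, quiet ones get nudged up.
        return target_dbfs - track_rms_dbfs

    # Using the RMS figures measured earlier in this article:
    print(playback_gain_db(-6.3))   # about -9.7 dB: the Radiohead master is turned down
    print(playback_gain_db(-17.3))  # about +1.3 dB: the MBV master gets a small boost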

There is a happy medium for most projects using the powerful tools available to manage gain and dynamic range in mastering. Familiarize yourself and your mastering engineer with a few examples of music you believe sounds good and bad. This can be the best tool to communicate your sonic preferences and to help your album reach its fullest potential while preserving the important sonic decisions made during the arrangement and mixing stages.

LOUDNESS RELATED LINKS:

WNYC's Soundcheck: Drowning In Sound
(This radio program covers some interesting material, but contains a number of factual errors from one guest discussing radio play. For more on that, please see the article below regarding radio processing.)
Rolling Stone: The Death Of High Fidelity
UK Guardian article on CD sound and loudness
Digital Distortion White Paper
What Happens to My Recording When it's Played on the Radio?
Turn Me Up! (A new organization promoting standards for dynamic range in commercially released recordings)

Read the whole story
windybank
104 days ago
reply
sad
Cammeray, New South Wales, Australia
Share this story
Delete

Pot Head

1 Comment and 3 Shares

Read the whole story
windybank
107 days ago
reply
Florida?
Cammeray, New South Wales, Australia
Share this story
Delete