For obvious reasons, we’ve all been a bit cyclone-fixated this past week, while the rest of the world has kept ticking over regardless. For example: There have been more protests by indigenous rural communities in Peru against a coup that’s been led by the wealthy urban elites; the West has pushed Iran even further into the arms of China; and Russia is unfurling a sneaky strategy that’s intended to create a whole new sphere of Kremlin influence in Africa. And despite Starlink being our connectivity saviour during the cyclone aftermath, this doesn’t mean that Elon Musk is any less of a monster.
Oh, and Finland faces an election on April 2. Remember Finnish PM Sanna Marin, and the idiotic questioning to which Marin and Jacinda Ardern were subjected a few months ago by a New Zealand journalist? Well, Marin is currently running second in the polls, four points behind her centre-right opponent Petteri Orpo:
Opposition leader Orpo recently suggested that rising debt risked undermining Finnish welfare provision and said the country needed to “wake up to what government indifference to debt was leading to.” In her campaign launch, Marin said closing “tax loopholes” would ensure the economy remained healthy. This could include higher taxes on capital and inheritance.
The Finns’ sudden panic about debt looks like a beat-up. Finland’s most recent debt to GDP ratio has risen by only two points – to 70.9 percent – compared to where it was in the same quarter a year ago. That hardly signifies the end of the world. In New Zealand, our government debt to GDP ratio is freakishly low, as these IMF figures indicate:
The IMF’s general government net debt indicator shows New Zealand’s debt at 21.3 percent of GDP in 2023, compared to 31.6 percent in Canada, 40.7 percent in Australia, 71.3 percent in the UK and 94.9 percent in the US. This illustrates that New Zealand’s net debt ceiling is set at a conservatively low level compared to the net debt of our international peers.
That’s why Finance Minister Grant Robertson could readily borrow the money to rebuild and future-proof our infrastructure against the tropical cyclones likely to be sent our way in future by climate change. Not that this column is about the cyclone. Instead…
The Internet is in peril!
This week, the US Supreme Court begins hearing oral arguments on a couple of cases that – by the time the Court’s judgement gets released in about (my best guess) June 2023 – could change the nature of the Internet. Basically, the cases could expose Internet platforms and search engines to legal action over the content they carry, and how they manage access to it.
Meaning: If the US Supreme Court gets this wrong, the likes of Elon Musk could find it far easier to set their legal hounds loose on anyone who says stuff they don’t like… And that legal action could well be directed at the platform that carried the content, and the search engine that brought people to the site. That’s not all of it. Republicans already think the Net is a liberal conspiracy that’s biased against them. So they’re looking for the power to legally compel sites to carry more right-wing content, and to give it greater prominence. We could well end up with forced speech, under the guise of free speech.
You can get a lively, accurate introduction to the legal issues and the social freedoms at stake by spending 20 minutes with this guy:
Taking aim at section 230
At the heart of the matter is the fate of the famous section 230 ‘safe harbour’ provisions contained in the US Communications Decency Act of 1996. As many people have argued, this is the most important statute in the history of the Internet.
Basically, section 230 protects Internet companies (a) from legal liability for the content they carry and (b) from any inference that taking it down later was an admission of liability in the first place. BTW, content that’s related to certain federal crimes (e.g. child pornography) is not protected by section 230. That federal law exemption will be central to the cases now before the Supreme Court.
Surprising as it may be at first glance, the original aim of passing section 230 back in 1996 was not to give Internet companies carte blanche. Instead it was a recognition that it is physically impossible for an Internet platform to pre-moderate everything it carries, given the millions of items published every day. That being so, the main aim of section 230 was to incentivise the platforms to operate a “notice and takedown” system: removing harmful, libellous or copyright-infringing material once they’ve been notified of its existence. Obviously, this is an imperfect solution, but one that’s better than any of the alternatives on offer so far.
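The logic of that trade-off is easy to model. Here’s a minimal sketch (a hypothetical toy model, not any platform’s actual system) of why liability under a “notice and takedown” regime attaches only once a notice is received and ignored, rather than at the moment of upload:

```python
from dataclasses import dataclass


@dataclass
class HostedItem:
    """One piece of user-supplied content on a hypothetical platform."""
    item_id: str
    content: str
    noticed: bool = False   # has the platform been notified about this item?
    removed: bool = False


class Platform:
    def __init__(self) -> None:
        self.items: dict[str, HostedItem] = {}

    def upload(self, item_id: str, content: str) -> None:
        # No pre-moderation: with millions of items a day, that is impossible.
        self.items[item_id] = HostedItem(item_id, content)

    def receive_notice(self, item_id: str) -> None:
        # A notice starts the clock; the safe harbour now depends on acting.
        if item_id in self.items:
            self.items[item_id].noticed = True

    def take_down(self, item_id: str) -> None:
        if item_id in self.items:
            self.items[item_id].removed = True

    def exposed_to_liability(self) -> list[str]:
        # Only items the platform was told about, but left up, fall
        # outside the safe harbour in this toy model.
        return [i.item_id for i in self.items.values()
                if i.noticed and not i.removed]
```

In this sketch, uploading something dreadful creates no exposure by itself; ignoring a notice about it does, and taking it down after notice restores the safe harbour. That is the incentive structure the 1996 statute was aiming at.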
No doubt, all of us can think of stuff that gets posted online that we wish wasn’t there. Yet outside of a few socially backward states in the USA, we’ve got beyond trying to punish people for the uses to which third parties put the material that they carry. We operate a system of complaints (and Chief Censor’s rulings) on the problematic material at the social margins, and more or less, this functions as an analogue version of a “notice and takedown” system. Thankfully, we no longer sue and/or prosecute bookshops and libraries over the material they stock.
By the same logic, we don’t try to sue Toyota because the Taliban had a tendency to use Toyota Hiluxes to wage war all over Afghanistan. Arguably, people – not their tools – are best held responsible for the bad stuff they do.
It is not entirely that simple though. For a contrary example: should people be able to sue gun-makers for the harmful ends to which some people put their products? Oddly, the leftists who say “yes” to that question also tend to be the ones who say “no” to the scrapping of legal “safe harbours” for the content on media platforms. (True, some liberal Democrats do want to scrap section 230. They appear to have no idea of the likely consequences if they ever succeeded.)
Just as paradoxically, the right-wingers who most want to scrap the section 230 protections enjoyed by YouTube, Google and Twitter also tend to be just as fervent about keeping gun-makers like Smith & Wesson and Remington safe from legal liability over what some people do with their products. But consistency, so they say, is the hobgoblin of small minds.
All of that aside…. what are the two cases that the Supreme Court will be scrutinising this week? Here are the SCOTUSblog breakdowns of Gonzalez v Google, and also of Twitter v Taamneh. Both cases are to do with the victims of terrorists. Nohemi Gonzalez was among the 130 people killed by jihadis in Paris in 2015. Nawras Alassaf (whose family brought the initial Taamneh case) was among 39 people killed in an Istanbul nightclub in 2017.
Neither case involves liability for online content per se – in the Gonzalez case the argument is that the Google algorithm recommended content that the terrorists made use of, while in the Taamneh case the claim is that Twitter “aided and abetted” the terrorists. The argument being put is that “recommending” and “aiding and abetting” do not qualify for section 230 protections. Here’s the gist of the Gonzalez case:
Whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information.
The lower courts tossed out Gonzalez, and were almost as dismissive of Taamneh. The 9th Circuit appellate court threw both cases a lifeline, but in different ways. In effect, the lower courts had ruled (correctly, IMO) that a search engine can’t – merely by recommending something it hosts – thereby lose all its existing legal protections with respect to that hosted content. As free speech expert Mike Masnick explained on his Techdirt website:
The whole point of Section 230 is to put the liability on the proper party: the one actually speaking. Making sites liable for recommendations creates all of the same problems that making them liable for hosting would — specifically, requiring them to take on liability for content they couldn’t possibly thoroughly vet before recommending it. A ruling in favour of Gonzalez would create huge problems for anyone offering search on any website, because a “bad” content recommendation could lead to liability, not for the actual content provider, but for the search engine. That can’t be the law, because that would make search next to impossible.
As for the Taamneh case, the lower courts initially threw this out for failing to adequately make a valid “aiding and abetting” argument, even before the litigants got to raising any section 230 grounds. When directed by the 9th Circuit to reconsider, the lower court eventually decided that the “aiding and abetting” grounds could be pleaded, and it has been Twitter that has taken that ruling to the Supreme Court. So to repeat: Taamneh is not strictly a section 230 case, not that this will probably matter much to the Supremes.
So far, the best summary I’ve read of the issues in play and the legal arguments likely to be raised before the Court is this article, by Jess Miers. The entire piece is essential reading, but – for instance – I liked this bit of her analysis which deals with whether an algorithmic recommendation qualifies as an interaction, for which the user bears responsibility, and especially so if the recommendations are engaged with over a period of time:
As discussed in the amicus brief by The Copia Institute et al., recommendations and the algorithms that drive them are not magic. In order for YouTube to recommend relevant content to a user, the user must interact with the service. In that case, the user is indirectly requesting recommendations by continuing to engage with the service. [As the Copia brief says]:
“In reality algorithms need not be complex: simply listing in chronological or alphabetical order is an algorithmic rendering. It is also important to remember, especially here, that what is at issue is not some sort of foreign magic but tools of varying complexity that humans deliberately choose to employ as suits their expressive interests.”
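The Copia brief’s point – that even a plain chronological listing is an “algorithmic rendering” – is easy to demonstrate. A minimal sketch, using hypothetical post data for illustration:

```python
from datetime import date

# Hypothetical posts on a hypothetical platform.
posts = [
    {"title": "C", "published": date(2023, 2, 1)},
    {"title": "A", "published": date(2023, 1, 15)},
    {"title": "B", "published": date(2023, 1, 20)},
]

# A reverse-chronological feed -- the thing people often call having
# "no algorithm" -- is itself an algorithm: sort by timestamp, newest first.
chronological = sorted(posts, key=lambda p: p["published"], reverse=True)

# An alphabetical index is simply another algorithmic rendering of the
# same underlying data.
alphabetical = sorted(posts, key=lambda p: p["title"])
```

Both feeds are deliberate editorial choices about how to present the same third-party content, differing from YouTube’s recommendation engine only in complexity. That is precisely why a ruling that strips “recommendations” of protection is so hard to confine.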
As Miers says, recommendations are not magically imposed. Clicking on them is a choice, and liability rests with the user for making that choice. The act of carrying the online content is also – within certain parameters – legally protected speech, for good and practical reasons. As Vox suggests, both cases will have to rely heavily on the “federal crimes” exemption I mentioned earlier, whereby “any national of the United States” who is injured by an act of international terrorism can sue anyone who “aids and abets, by knowingly providing substantial assistance” to anyone who commits “such an act of international terrorism.”
Yet… Can the fact that the terrorists in Paris and in Istanbul used Google and Twitter for a variety of purposes be construed to mean that Google and Twitter should be held legally liable for the tragic uses to which their services were put? Again, these are tools, not actors. As Vox says, even if Google could somehow succeed in altering its search algorithm such that it could detect Islamic State jihadists or lone wolf shooters searching online via Google before they acted on their beliefs – which seems unlikely – it is not hard to imagine how that kind of surveillance tool might be commandeered and used by the rulers of authoritarian states:
Imagine a world, for example, where India’s Hindu nationalist prime minister Narendra Modi can require Google to turn such a surveillance apparatus against peaceful Muslim political activists as a condition of doing business in India.
There are nine justices on the US Supreme Court bench. Clarence Thomas and Samuel Alito have made known their hostility to section 230. Amy Coney Barrett is also likely to vote with them. As a counter, various calculations have treated Brett Kavanaugh and John Roberts as likely to be defenders of free speech and of the business models of Internet companies.
BTW, the fact that the 9th Circuit appellate court based in San Francisco found little or no merit in the section 230 grounds for these two cases is not very reassuring – given that the Supreme Court routinely overturns 9th Circuit rulings at a significantly higher rate than the rulings made by the other appellate courts.
Most of the guesswork about how the final votes may fall tends to assume that the liberal bloc of justices (Sotomayor, Kagan, Jackson) will stay solid behind section 230, but that’s not a given. Not when so many liberals and Democratic Party heavyweights like Amy Klobuchar routinely decry the dreadful stuff available online, and the unbridled power of Big Tech. Probably, the Democratic Party also sees section 230 reform as a way of bringing Google, Facebook and Twitter to heel, without all the trouble of launching anti-trust cases against their market dominance.
Yet as Jess Miers points out, a Supreme Court ruling that scraps or limits section 230 protections could have devastating social impacts on the reproductive rights of women – and also, one assumes, on the civil rights of trans people:
For example, an adverse decision in this case will detrimentally impact the availability of reproductive healthcare information for women living in states with anti-abortion laws. As we noted in our letter to AG Garland last year:
“Should the Court curb Section 230’s protections for algorithmic curation, online services would face extreme threats of liability for promoting life-saving reproductive health information, otherwise criminalized by state anti-abortion laws.”
Finally… As Mike Masnick concludes, it is entirely possible that the outcome could be a 6-3 rejection of the two cases. Yet given the tragic context for these cases – two innocent young people gunned down while out for a night on the town – a middle course might be charted by a Roberts court sensitive to the political dimensions of the cases. Roberts, Masnick says, could attempt to steer the majority decision to a clever-clogs middle course whereby Google “recommendations” are separated out from section 230 protection, even if only under a tighter set of legal conditions.
Unfortunately, that would quickly become a one way street. Essentially, eroding the section 230 protections is not the way to lessen the impact of potentially harmful online content. Amongst a lot of other negative outcomes, such a middle course would leave the statute vulnerable to further legal attacks that, as Masnick says, are already waiting in the wings.
Why would all that be a bad thing? Well, if the legal liability for online content is expanded, only those able to afford to defend their content in court will be safe. Free speech would contract under the inhibiting threat of legal action from people, firms and political parties out to (a) enhance their political power, and (b) to reap the financial gains possible from the subsequent breakup of the Internet into paywalled segments.
All of us, in other words, have quite a lot riding on how the Supreme Court decides these two cases.
Footnote: Obviously, US courts do not have jurisdiction here. Yet just as obviously, much of the Internet’s core infrastructure and services are owned by US companies, so the ripple effects would be felt here. Moreover, the Internet’s “phonebook” – the Domain Name System – has at times seen certain domain names being impounded by US authorities. All of which means the outreach and impact of US court rulings isn’t limited solely to the activities of US companies.
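The “phonebook” point is about hierarchy: resolving a name walks down a chain of delegated zones, and whoever administers a registry in that chain can impound a name for everyone, regardless of where the site itself is hosted. A toy sketch of that delegation chain (purely illustrative, with a hypothetical domain and no real network lookups):

```python
def delegation_chain(hostname: str) -> list[str]:
    """Return the chain of DNS zones consulted to resolve a name,
    from the root downwards. Illustrative only -- no network queries."""
    labels = hostname.rstrip(".").split(".")
    chain = ["."]  # resolution always starts at the root servers
    # Build each successively longer suffix: com. -> example.com. -> ...
    for i in range(len(labels) - 1, -1, -1):
        chain.append(".".join(labels[i:]) + ".")
    return chain

# Each link is a point of control: the operator of the "com." registry
# can redirect or seize "example.com." for the entire Internet, which is
# how US authorities have been able to impound domain names.
```

Running `delegation_chain("www.example.com")` yields the zones from the root down to the full name, which is why a ruling (or seizure order) enforced at the top of that chain reaches well beyond US borders.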