Generally Pretty Tired

Around the Web
Issue No. 019

ProfitAI, generating disinformation, Okra against microplastics, and filming the speed of light.

Welcome to Around the Web. The newsletter for hibernation and soap bubbles.

I had grand plans for the first anniversary of this newsletter in February. But, as you may well have noticed, nothing happened. Why? Because I’ve been too tired; my brain essentially entered a phase of awake hibernation. I managed to get my work done, but nothing else.

But it’s spring, I’ve been on vacation, and the headlines are still headlines; underneath them there was no beach, only rubble. Let’s look into it.

This ain’t intelligence

Before I start, I want to highlight one link that is broadly applicable. Baldur Bjarnason has thankfully compiled a great list of tips and tricks to assess AI research and separate information from public relations.

Secondly, I highly recommend one article I’ve read about language models and image generators that explains why the outputs of these models are much like a blurry JPEG of all the information they were trained on: ChatGPT Is a Blurry JPEG of the Web. These models have consumed more or less the whole internet; when prompted, they try to recreate this information – sometimes it works well, sometimes the result is distorted beyond recognition.
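If you want to see the metaphor in action, here is a small sketch of my own (not from the article, and an analogy for lossy compression only, not a claim about the models’ internals): re-encode an image at a very low JPEG quality and watch the broad strokes survive while the fine detail turns into plausible-looking artefacts.

    # A toy illustration of the «blurry JPEG» metaphor (requires Pillow).
    # This only demonstrates lossy compression, not how language models work.
    from io import BytesIO
    from PIL import Image, ImageDraw

    # Build a test image with fine detail: thin horizontal lines.
    original = Image.new("RGB", (256, 256), "white")
    draw = ImageDraw.Draw(original)
    for y in range(0, 256, 8):
        draw.line([(0, y), (255, y)], fill="black", width=1)

    # «Compress the web»: re-encode at quality=5, far below the usual 75-95.
    buffer = BytesIO()
    original.save(buffer, format="JPEG", quality=5)
    buffer.seek(0)

    # The reconstruction resembles the original, but compression artefacts
    # have replaced the actual detail. Compare the two files side by side.
    blurry = Image.open(buffer)
    original.save("original.png")
    blurry.save("blurry.png")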

Now, the news.

OpenAI still tests their models on the public. Which is an interesting idea, but also very wrong. We had barely coped with ChatGPT when Microsoft added some GPT to Bing, essentially turning their search engine into a bullshit-spewing hate machine. Hot on the heels of this, OpenAI «published» GPT-4. Why «published»? Because, in essence, OpenAI published nothing.

While being a bit more cautious in their announcements, they did promise some things. Among them: that GPT-4 is better than previous versions at preventing the generation of misinformation. This appears to be false. NewsGuard tested a set of prompts against ChatGPT-3.5 and ChatGPT-4. ChatGPT-3.5 blocked 20 of 100 prompts, whereas version 4 happily generated text for all of them. Version 3.5 also added more disclaimers noting that the generated text contains falsehoods than version 4 did.

For some, ChatGPT is still too woke. Conservatives Aim to Build a Chatbot of Their Own.

The version bump is also affecting research. Codex, an API for researchers, was shut down with just three days’ notice, asking researchers to move to ChatGPT. Essentially, this makes it impossible to reproduce any research done using the Codex API. The former research lab is now firmly a for-profit hype vendor.

Amazon’s lawyers, meanwhile, are begging its employees to stop using ChatGPT, as they have discovered output «closely resembling» internal company data.

On Monday, ChatGPT was briefly taken offline after it showed users the prompt history of other users. In a blog post, OpenAI acknowledged that some payment information had been shown to the wrong users, too. They blamed a bug in redis-py, the Redis client library for Python.

Is this the defining headline of all I ever talk about? Fake ChatGPT Chrome Extension Hijacking Facebook Accounts for Malicious Advertising.

Enough with ProfitAI for now.

Fakes, fakers, and facts

The new machine learning tools have made it easier than ever to generate content, as we’ve seen above, with no regard for truth. A few years ago, deepfakes were largely a theoretical problem. «As of 2018, according to one study, fewer than 10,000 deepfakes had been detected online. Today the number of deepfakes online is almost certainly in the millions.» (The Deepfake Dangers Ahead)

The more media are involved in fabricating these falsehoods, the harder they become to recognise. Deepfakes capitalise on this, and they are getting better. Speech synthesis, for example, has made rapid progress over the last few years.

Not to mention what might happen when the subjects of the fakes post the fakes themselves. What if, say, an ex-president of the United States of America posted a faked photo of himself? Oh: Donald Trump Shares Fake AI-Created Image Of Himself On Truth Social.

Excuse me, but I’ll mention Trump again. Last weekend, he posted that he would be arrested on Tuesday. That didn’t happen. But Bellingcat founder Eliot Higgins took the opportunity to let Midjourney imagine what it might have looked like.

Look closely and the pictures are obviously not real. But in a media environment where nobody looks closely, where we scroll past a stream of information, they are good enough to sow doubt and disbelief.

Midjourney promptly suspended Higgins’ account.

Which images can you trust, which stories can you believe, when enough of what you read is a fabrication? And what if one system cites the bogus output of another, as has already happened with Bing and Bard? Or when more and more journalists flock to chatbots to get their articles off the ground and don’t check (out of laziness, time pressure, or bad faith) whether every single sentence is correct? The Grayzone published an article trying to claim that the documentary Navalny contains misinformation; the article was based on a conversation with Chat Sonic, a ChatGPT alternative. And so it cited misinformation to allege misinformation.

And yes, the solution to all of this is media literacy, but how do we teach it? We can look to Finland, where it is taught in school. But somehow I don’t see that happening in the rest of the world.

It’s truly a shame, since all these advancements in technology could also do good, such as making sense of our universe.

Social Mediargh

In Germany, content moderators working on Facebook’s and TikTok’s products have begun to organise, demanding better conditions for their often gruesome work.

What they don’t moderate are ads. Those, it seems, are getting worse (proving once again that whatever you think is the worst is only a glimpse of the possibilities of bad).

But advertising experts agree that crummy ads — some just irritating, others malicious — appear to be proliferating. They point to a variety of potential causes: internal turmoil at tech companies, weak content moderation and higher-tier advertisers exploring alternatives. In addition, privacy changes by Apple and other tech companies have affected the availability of users’ data and advertisers’ ability to track it to better tailor their ads.

Meanwhile, we’ve seen the end of free speech on Twitter. It’s now against the terms of service to wish harm on other users. In India, Twitter has decided that fighting the demands of oppressive regimes is too time-consuming and is now blocking accounts at the request of the government. We’ve come a long way from «I’ll allow all speech». Antisemitism might still be fine.

Twitter, always a fan of announcing, announced that it will disable checkmarks for legacy verified users on April 1st. Yes. LOL. Will they do it? Who knows? What does it mean? What does anything mean today? Anyway. If they do, every person stupid enough to pay Musk will be visible at a glance. Great. Ryan Broderick has been kind enough to summarise the whole farce in Garbage Day:

Elon Musk and an army of the tech industry’s biggest reactionary dorks literally bought and took over Twitter after years of being both obsessed with it and also completely consumed with resentment over “the liberal establishment’s” perceived importance on the app. They were furious that they did not also get the same little blue checkmark that 22-year-old viral news reporters were given so they could protect themselves from impersonators and mute some of the death threats they get on a daily basis. And so these giant losers built a new way to pay for a blue checkmark so they could pretend like they were just as important as they assumed the verified users believed themselves to be. And they expected everyone else to eventually pay to keep their checkmarks. No one has, of course, but Twitter is still moving forward with this. But they seem to realize that if they do that all it’ll do is make Musk’s try-hard fanboys immediately identifiable on the app. So now they’re building a way to hide how lame they will look alone on the site with their paid checkmarks.

Elsewhere in free speech, TikTok is under threat of being banned in the USA.

Can I talk to you about e-mails?

Thanks for making it this far. Maybe you are interested in getting Around the Web as an e-mail whenever a new issue is published?

There’s also an RSS feed. It’s like e-mail, but better (imho).


EOL of humanity

The comprehensive review of human knowledge of the climate crisis took hundreds of scientists eight years to compile and runs to thousands of pages, but boiled down to one message: act now, or it will be too late.

Scientists deliver ‘final warning’ on climate crisis: act now or it’s too late

It’s incredibly important to shift the discourse around this into the present tense. The thing we called «normal» is gone.

The map shows that per- and polyfluoroalkyl substances (PFAS), a family of about 10,000 chemicals valued for their non-stick and detergent properties, have made their way into water, soils and sediments from a wide range of consumer products, firefighting foams, waste and industrial processes.

Revealed: scale of ‘forever chemical’ pollution across UK and Europe

Open Mind, thankfully, wrote a fair bit about climate footprint calculators and their role in blaming individual behaviour for the climate crisis. We can’t individual ourselves out of it – which, on the flip side, does not absolve us from changing our individual behaviours.

BP was correct that carbon calculators can be useful. And individual responsibility has a place. But BP hijacked legitimate scientific research and weaponized it to serve the company’s purposes by blaming us instead of itself. While this sounds pretty bad, there is some good news: You can take the science back and use it for the change it was intended to make.

The vegans were right: plants are (part of) the answer. Texas Researchers Use Okra to Remove Microplastics from Wastewater.

Tante reflects on Metaverses and why they might never come to be.

The Metaverse never came to pass not because of lacking tech but because of tech that worked massively well: The Internet has been so useful that it now is part of the real world. And the Metaverse idea only makes sense in a world where that didn’t happen.

While I was away, the US of A shot down several balloons. Aliens? China? Maybe just hobbyists.

Shot: Implicit bias training for cops will surely prevent them from killing people. Chaser:

Although the training was linked to higher knowledge for at least 1 month, it was ineffective at durably increasing concerns or strategy use. These findings suggest that diversity trainings as they are currently practiced are unlikely to change police behavior.

Lai & Lisnek – The Impact of Implicit-Bias-Oriented Diversity Training on Police Officers’ Beliefs, Motivations, and Actions

Did you know that you can film the speed of light? Me neither. But you can.


That’s it for this issue. As always, thanks for reading and if you have a friend who might enjoy reading it too, subscribing is free, free like a bird.

Stay sane, hug your friends, and be kind to the skeleton within you.
