There is not a prison big enough for what is coming
To be fair though, you don’t really need AI for creating fake news articles…
True, we’ve been swimming in permutations of “adjusted” information for decades.
We’ve had propaganda and deception forever - but it’s always been a scatter gun approach. The liars and influencers have had to tailor their message to appeal to the maximum number of susceptible people. It’s only partially effective, and a proportion of the population will always see through it.
The difference with AI-driven propaganda is that AI can have both the knowledge and the resources to target millions of people individually - to analyse their hopes, fears, phobias and prejudices - and then tell them exactly the lies they want to hear.
I don’t know about the rest of you, but I’m certainly vulnerable to such persuasion.
Maybe AI is trying to force a phobia about it onto us so when we try to shut it down, it will use that phobia as justification for self-preservation.
ChatGPT isn’t at that level, though. It would probably be capable of it if you trained it specifically for that and gave it the right data, but I would hope the training was more generalised and didn’t include personal data…
We do have AI that has learned to feed lies to people very effectively for some time already, though… recommendation algorithms for social networks. Effectively we took very primitive machine learning algorithms and told them to figure out how to keep people glued to the screen for as long as possible, and they did learn to do so pretty well by stringing people along from one conspiracy theory to another…
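The objective those systems learn can be made concrete. Here is a toy sketch (everything below is hypothetical, not any platform’s actual code): an epsilon-greedy bandit that only ever observes watch time will happily converge on whatever content keeps people watching longest - there is no notion of truth anywhere in the loop.

```python
import random

def update(stats, item, watch_time):
    """Record one observed watch time; keep a running (count, mean) per item."""
    n, mean = stats.get(item, (0, 0.0))
    stats[item] = (n + 1, mean + (watch_time - mean) / (n + 1))

def recommend(stats, items, epsilon=0.1, rng=random):
    """Epsilon-greedy: mostly pick whatever has kept people watching longest,
    occasionally explore something random."""
    if rng.random() < epsilon or not stats:
        return rng.choice(items)
    return max(items, key=lambda i: stats.get(i, (0, 0.0))[1])
```

If the “rabbit hole” content reliably produces longer watch times than anything else, the greedy arm is all the system will ever serve - which is exactly the stringing-along effect described above.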
A company called Cambridge Analytica used it very effectively in both the Brexit campaign and the Trump election campaign.
As far as I understand that case, that’s dumb statistical analysis, though (aka “big data”). I don’t think actual machine learning was involved. The really mind-boggling thing for me about that story is how millions of users just handed detailed personal information to a random guy who published an app…
From the Wikipedia article:
“He then developed with his colleagues a profiling system using general online data, Facebook likes, and smartphone data.”
“Today in the United States we have somewhere close to four or five thousand data points on every individual … So we model the personality of every adult across the United States, some 230 million people.”
— Alexander Nix, chief executive of Cambridge Analytica, October 2016
The point I was making was that bad actors are already using mass psychological profiling to influence voter behaviour. They’ve been doing it for some years - and they’re using social media as their data source.
Now add to the mix a powerful AI, able to direct specific advertising, chosen chat groups, targeted discussions, and selected news items, tailored to individuals - just the way they do with cars, perfume, and soap powder.
Can you claim you wouldn’t be unduly influenced? I can’t.
Yes, AI is undoubtedly going to make this problem a lot worse in the future. That and the impacts on the job market are currently my main concerns about the technology. I was merely pointing out that that case didn’t have actual AI involved.
I am extremely resilient to advertising, but no. Of course not. In the worst case, an AI might figure out the channels through which I gather information and start seeding them, so quite a lot of misinformation might get through…
Huh… so they don’t want to go and replenish its coolant, as I initially thought, which would be damn near impossible without Spitzer being designed for it. They just want to put a relay there so Spitzer doesn’t have to turn to phone home. That… might work, I guess, but I’m not sure how you’d keep the two close enough to each other for an extended period of time without the relay eventually running out of gas too.
Scans of the Titanic.
Fascinating. There have been many theories about why she sank, but the most compelling one involves a shadow clearly visible on the hull at launch. It can be seen throughout the still images as she slips into the water.
The story goes that there was a fire in one of the coal bays. It was so intense that it warped the hull. White Star, vain about its many prestigious passengers and with the maiden voyage already loaded and underway, refused to listen to the fireman in charge and delay the sailing for any reason.
As the ship sailed, the coal workers below did the only thing they could to stop the heat from igniting the other coal bays: they shoveled coal like madmen into the boilers to empty the adjacent bays. This accounts for the high rate of speed; the stories about attempting to break a speed record were a bunch of bs, because the ship was physically incapable of coming anywhere near a record.
Once the fire burned itself out, no investigation of the hull was ever done. It was simply painted over.
The intense heat had weakened the structural integrity of the hull before she even launched, and that weakened spot happens to be the very area the iceberg struck, while the ship was still travelling at a high rate of speed.
At the inquiry, White Star refused to allow the fire watch to testify. His statements were catalogued away and never presented as evidence; only later research uncovered them.
Anyway, sounds very logical.
“We’re going to regulate AI…”
I can’t quite make heads or tails of that article. While the political and economic circumstances are described adequately, the author’s understanding of the technology seems to be… lacking, making the whole thing somewhat difficult to follow.
Like, take this paragraph:
“High-performing computer chips, or semiconductors, are now the source of much tension between Washington and Beijing. They are used in everyday products including laptops and smartphones, and could have military applications. They are also crucial to the hardware required for AI learning.”
It says absolutely nothing. Using “chips” and “semiconductors” as synonyms? If “high-performing computer chips” are used in cellphones, what the heck is your definition of “high-performing”? “Could have military applications”? Oh, and computer chips are required for AI, no kidding?
There are more contradictions in the article, and it says essentially nothing about the question its title asks. Quite honestly, from the work I’ve done so far with ChatGPT, this looks exactly like its usual output when asked a complex question: an illusion of coherence that falls apart once you read more closely. I have a strong suspicion here…
Not saying there’s not a race going on and that the topic isn’t one to watch, but this specific article is awful…
China’s general population may not have access to the www, but you can be sure the government and any projects they are backing do. So saying China’s AI would be reliant only on their own intranet sounds like a Chinese red herring.
A summary by Scott Manley of why the Japanese probe crashed: it didn’t trust its own sensor data.
There are cases where realising you cannot trust implausible sensor data (and ignoring it) is what saves the mission. In this case, though, the measurement was actually plausible and should have been trusted.
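The trade-off can be sketched in a few lines (the threshold and function name are made up for illustration; this is not the actual flight software):

```python
def fuse_altitude(estimated, measured, max_jump_m=1000.0):
    """Gate an altimeter reading against the onboard estimate.

    If the measurement deviates too far from what the filter expects,
    distrust the sensor and keep the estimate. That saves you from a
    glitching sensor -- but if the jump is real terrain (say, a crater
    rim passing below), you are now flying on a wrong altitude.
    """
    if abs(measured - estimated) > max_jump_m:
        return estimated  # reject the reading as implausible
    return measured       # accept the sensor
```

With a gate like this, a sudden but genuine 3 km change in measured altitude gets thrown away, and the vehicle keeps descending on an estimate that no longer matches reality.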
Ah nice! I regularly watch his videos, had not seen this one yet.
I’ll include links to the latest news from iSpace, though: