Starship - Flight Test Recap
Aliens!
https://asignin.space/the-message/
Video from a few days ago:
Shenzhou 16, the 5th manned flight to the Tiangong Space Station, is scheduled to launch on a Long March 2F rocket on Tuesday, May 30th, 2023.
The breakthrough in language production was impressive, albeit parrot-like.
It understands categories but not individuals.
I wonder whether the inability to use concrete data (instead of deriving generic statements) is inherent in the method, or whether they could improve that. Maybe they need two components that work together?
It's dealing with totally abstract logical constructs. It has little or no understanding of cause and effect. It has never related the symbols it manipulates to real-world objects and situations. It needs a heavy dose of empiricism.
Give it some video cameras and manipulator arms. Let it play.
I'm not sure how far it can be improved with a pure unidirectional transformer model. It was developed to produce language, not to be a general AI. It just so happens that language can be applied to a hell of a lot of problems that cannot necessarily be solved by it.
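(To illustrate what "unidirectional" means here, a minimal sketch in plain NumPy, toy sizes, nobody's actual model code: each position can only attend to itself and earlier positions, so the architecture is built to continue a sequence, never to look ahead or revise.)

```python
import numpy as np

def causal_self_attention(x):
    """x: (seq_len, d) token vectors; one head, no learned projections."""
    seq_len, d = x.shape
    scores = x @ x.T / np.sqrt(d)                 # pairwise attention scores
    future = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores[future] = -np.inf                      # mask out every later position
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x                            # each token mixes only its past

rng = np.random.default_rng(0)
out = causal_self_attention(rng.normal(size=(5, 8)))  # row i sees tokens 0..i only
```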
They could probably improve it some, but they'd have to train it specifically. And training the abstraction below the language (i.e. how words relate to reality rather than just to one another) takes a lot more manual effort and would probably take a long time…
In the meantime, I'm glad ChatGPT is doing really well in all kinds of exams, but really, really sucks when actually trying to do the job. If we get very lucky, it'll kick our educational institutions into gear to finally come up with training and tests that are actually relevant to the job you're going to do.
Right, you hit the nail on the head. I was phrasing it clumsily.
Looking forward to that!
From an academic point of view I agree, even though it's also the first step towards speech-activated household killer robots. Boston Dynamics is training for physical understanding, but not for language skills, right?
Language is currently too expensive to process on a mobile platform. You could make an interface to an API, but then you're facing an issue: your language processor would be completely decoupled from the rest of your platform's intelligence, i.e. translation from one to the other will be difficult and potentially very error-prone. It's the equivalent of telling one guy what you want, and he'll be telling a different guy who speaks a different language what he's supposed to do. Which, as all software developers know, works just splendidly…
Also, in this case both guys are relatively dumb.
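To make the two-guys picture concrete, here's a toy sketch of that decoupling. Every name in it (query_language_api, PLATFORM_ACTIONS, the keyword table) is invented for illustration; the fragile part is the hand-off, where a free-form reply has to be squeezed into the platform's fixed command vocabulary.

```python
from typing import Optional

PLATFORM_ACTIONS = {"move_forward", "turn_left", "turn_right", "grip", "release"}

def query_language_api(utterance: str) -> str:
    """Stand-in for a remote language-model call; returns free-form text."""
    return "Sure! You should probably rotate leftwards and then grab it."

def translate_to_action(reply: str) -> Optional[str]:
    """Naive keyword mapping from free text to platform commands --
    the lossy hand-off between the two 'guys'."""
    keywords = {"forward": "move_forward", "left": "turn_left",
                "right": "turn_right", "grab": "grip"}
    for word, action in keywords.items():
        if word in reply.lower():
            return action
    return None  # the hand-off failed; the platform just stands there

action = translate_to_action(query_language_api("pick up the cup"))
print(action)  # "turn_left" -- close, but the "grab" part fell on the floor
```

Neither side can even tell that half the meaning got lost in that last step.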
Boston Dynamics are still more concerned with body mechanics than with autonomous problem solving. They build drones, not AIs on legs. Most of their onboard processing power is used to just move about efficiently without falling over under various circumstances (which, by the way, our brains also expend a whole lot of effort on. Don't forget that only about 10% of the brain is dedicated to abstract thought; the rest is busy regulating the gritty mechanics like not falling over, delivering sensory input, actuating our muscles and so on).
They are able to put in some intelligence, like finding paths, opening doors, traversing obstacles and sometimes also simple job-specific things like pulling levers etc. Stuff you want your drone to be capable of doing to reduce the input load while operating it. But they do not really make decisions on their own.
This actually was (in some cases, still is) a known evolutionary pathway. Many dinosaurs are believed to have had more than one "brain", each dealing with different information, and (possibly) in different ways.
The feature still exists in some species - the octopus and the cockroach, for example - but it seems to have been much more common in the distant past.
In the human brain, different areas serve different functions, and, whilst they communicate, they are also able to function autonomously (you keep on breathing when you're asleep, for example).
Parallel processing of different kinds of information by different processors is complex, yes. But itâs by no means unusual.
Perhaps relevant: if you touch something hot, the "reflex" that jerks your hand away processes faster than the signal travelling through the heat-detecting nerves can actually reach your brain. Genuine pain is processed by direct connections between the sensory receptors and the nerves that trigger a muscle response. Point being that "commands" of sufficient priority need to be distinguished and processed at the "extremity" level of your network, even if that extremity level consists of relatively simple components.
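In software terms that's essentially edge processing with a priority cutoff. A toy sketch, with invented names: the dumb local node acts immediately on anything above the threshold and only forwards the rest for the slow round trip to the central controller.

```python
import time

PAIN_THRESHOLD = 0.8

def central_brain(signal):
    time.sleep(0.3)                      # the "there and back" round-trip delay
    return f"considered response to {signal['kind']}"

def extremity_node(signal):
    if signal["intensity"] >= PAIN_THRESHOLD:
        return "jerk away"               # reflex: no round trip, no deliberation
    return central_brain(signal)         # everything else can afford the delay

print(extremity_node({"kind": "warmth", "intensity": 0.2}))  # via the brain
print(extremity_node({"kind": "burn",   "intensity": 0.95})) # local reflex
```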
There is some functionality that becomes inexplicable if you try to differentiate between "brain" and "nervous system." We have "a brain" because our nerve tissue is lumped up in one place for convenience of defense, and as relatively small creatures we can afford for almost all processes to involve a "there and back" transmission delay. Dinosaurs' nervous systems formed multiple "lumps" because at their size they had functions that couldn't afford the transmission time.
Hello, Skynet…
That is such a god-damned rookie mistake, though. Just about every thought experiment ever conducted on general AI behaviour predicted that exactly this was going to happen, multiple potential solutions have been proposed to the problem, and they just go ahead and… do none of that? Yeah, that bodes really well for the future. Great job!
I'm a bit confused about the denials - surely it's a good thing that they're testing in simulation?
I spent 25 years as an investigator. Nothing raises my suspicions so much as looking at a company's accident and injury records, and finding nothing there. Nothing says "incompetent" so much as the engineering manager who tells you "These machines never malfunction".
If people tell you nothing ever goes wrong, they're either lying, or they're not looking.
This seemed odd, so I went into the original articles a bit. Here's a statement by the man himself:
in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit, and the "rogue AI drone simulation" was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation, saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome".
This does make sense, though. He might have embellished a traditional thought experiment to make it a bit more engaging, which would perfectly explain why the example sounds like something straight out of AI Safety 101. It's because it is.
The alternative is that we have a military incompetent enough to just ignore all the work done up to this point, and a PR person incompetent enough to think that denying the whole thing would be a great idea if it had actually happened. Not implausible incompetence, but juuuuust enough to make me lean towards "spokesman decided to spice up his presentation a bit".
This is a time-lapse video captured on Wednesday night.
Reflection or moon? Trippy, eh?
This next video about Mars has me questioning what was seen earlier on a basic digital video camera.
OpenAI stole training data and made a web page that sells hundreds of "X in the style of artist Y" images for pennies, and they don't pay commission to the sources for the percentage they contributed. Any large company could have created such an AI in the last few years; they just didn't dare sell it as a product because it's clearly on the dark end of the legal grey area. It's about who sets the legal precedent first and makes money first. Other companies (who maybe tried to find a way to pay the artists fairly) are now commercially punished.
Now, Cyan can make the choice to only generate non-contemporary Art Nouveau and generic text snippets and hope that the training materials used in the output were 98% free to use, but not every company uses it "ethically" like that.
I'd like to propose a thought experiment. Say the invention of the car made horseshoe makers redundant. We consider this a normal step in the development of technology; 99% of horseshoe makers had to retrain to be car mechanics.
A car is made by combining new parts created with knowledge from different technology domains such as metallurgy, rubber, mechanics, physics and chemistry. A car factory does not produce "a million horseshoes for pennies"; it has added value. Even the horseshoe makers could agree that cars were better than horses.
If 99% of artists and writers happily retrained to use ChatGPT/DALL-E instead, then fewer people would produce art and text to train GPT with, and the neural network would stagnate.
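That stagnation argument can be demonstrated with a deliberately silly toy model; this illustrates the feedback loop, it's not a claim about how GPT is actually trained. Fit a distribution to a corpus, generate the next corpus from the fit, train the next generation only on that output, and repeat with no fresh human input:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=50)          # generation 0: "human-made" work

for generation in range(1, 31):
    mu, sigma = data.mean(), data.std()       # "train" on the previous corpus
    data = rng.normal(mu, sigma, size=50)     # generate the next corpus from it
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
# With no outside data entering the loop, estimation error compounds each
# round: the fitted distribution drifts away from the original and its spread
# tends to shrink over time -- the model ends up imitating its own echoes.
```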
It's not the same step of technology evolution yet. It directly depends on the people that it's putting out of a job; that's what puts people off. Thoughts?
If AIs want to make cars and the parts that go with them, I am all in. In fact, robots already help make cars. If AI-created "art" becomes more interesting to people than art inspired by human brains filled with thought and emotion, count me out. I fear the thought.
Again, it's powerful when used as a tool in creation, in the preliminary steps of workshopping and experimentation. Anyone who turns in something fully generated from an AI prompt will be called out for the hack they are. But hey, I still get people from China selling my photos that I took with a real camera on cups and t-shirts, and that shit's just gonna happen. Frankly I'd prefer capitalism and money just fuck right off, and then guess what, we can just make shit for the sake of making shit.
And an artist who has a problem with an AI trained off copyrighted material should turn off their own brain; we are all inspired by everything around us, and we pour it out in a subconscious way where most of the time we don't know where we are pulling it from. Quantitative research is incredibly important, and I'm proud to know my images were used to create and expand the field of AI image generation.
Artists losing money over this? No, not at all. It's just ego.
You have two paintings. Identical. One's real and one's fake. You know which one is fake. You can take one home. You'll go with the real one. But hey, sometimes someone might buy a bootleg knock-off of someone's work. Humans gonna human and grift. If you're an artist making art in a capitalist world then you gotta accept other people not connected to your work are going to exploit it one way or another.
I understand your comparison to tech advancements putting workers out (scribes > letterpress > computers), but this isn't the same thing. Not yet anyway. That's the thing, it's all just a fun magic trick of denoising and brute forcing. There is zero intelligence. It's expediting an artist's workflow, but it is not replacing anybody. Anyone who has done that - fired workers because of ChatGPT or Stable Diffusion - is a moron who probably sold half their IP so they could invest in NFTs and drank every drop of silly juice the AI tech bro was selling them. Because the thing I've noticed is that the executives who are selling the tech are making it sound better and/or scarier than it actually is right now, and the Boomer CEOs who have to get Karen from HR to find where the on switch is drink those lies up cos they just heard they get to fire half their art and writing department.
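For anyone wondering what that "magic trick of denoising" refers to, here's a bare-bones one-pixel sketch of the diffusion idea. The "denoiser" below is an oracle that already knows the answer; the entire trick of a real system is learning that function from a mountain of scraped images, which is where this whole argument started.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
betas = np.linspace(1e-4, 0.02, T)       # noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

x0 = 1.0                                 # the "image": a single pixel

def oracle_denoiser(x_t, t):
    """Stand-in for the trained network: recovers the noise exactly.
    A real model has to learn this mapping from its training images."""
    return (x_t - np.sqrt(alpha_bars[t]) * x0) / np.sqrt(1.0 - alpha_bars[t])

x = rng.normal()                          # start from pure noise
for t in reversed(range(T)):              # walk back, denoising a little each step
    eps = oracle_denoiser(x, t)
    x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:
        x += np.sqrt(betas[t]) * rng.normal()   # keep some randomness en route

print(x)   # lands back on x0 = 1.0
```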
I'm not happy with some of the names tied to OpenAI, but I am an open source proponent and I am very happy that AI is having an open source approach, so those in low-income families or impoverished areas have a chance to access, learn and grow from freely shared knowledge. But to call it stealing - again, this is the communist/socialist in me - feels wrong to me. I'm more concerned by people writing Python programs to sift through websites for personal information that build databases on people to be sold to marketers than I am by a program coded to scour the internet for as many images as possible to train an AI to create mock-ups on request. It almost feels like powerful people are trying to reissue those old fears so they can try to get powerful legislation over control of the internet again.
I dunno where I'm going with this, haha. Woke up with a flu and I've typed too much to just delete it all now… Em. It reminds me of a song I haven't thought about in years. A novelty act that played shows locally and had a cult following. Their first album was 5 minutes long and had 30 songs (and 500 titles; you ticked the one you wanted to be the album name on the cover).
Okay, it's been wiped from the internet, but the only lyrics to one of the songs are "big robots can't paint very well, big robots can't paint very well", and it's all I've been able to hear when AI art is brought up XD
edit: Wow, sorry for the cursing, and all of that.