Latest Space Missions (& Other Science Stuff)

Starship - Flight Test Recap


Aliens :alien: :alien: :alien: !

https://asignin.space/the-message/

Video from a few days ago:

3 Likes

Shenzhou 16

Video Source :link:

Shenzhou 16, the 5th manned flight to the Tiangong Space Station, is scheduled to launch on a Long March 2F rocket on Tuesday, May 30th, 2023.

3 Likes

The breakthrough in language production was impressive, albeit parrot-like.
It understands categories but not individuals.

  • E.g. it knows it’s expected to ascribe “books” to “authors” – but it includes arbitrary books…
  • It can correctly explain how to calculate something in general, but when asked to apply its own solution to a concrete situation, the results are random numbers…
  • It can write plausible, syntactically correct source code, but doesn’t care whether the commands actually exist in that language (see the sketch after this list)…
  • Teachers input students’ essays and it states it wrote that. Students input the school’s letter and it states it wrote that as well…
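
To make the source-code point concrete, here’s a small hypothetical illustration (not actual model output; the method name is invented): the generated line reads like perfectly plausible pandas, but the “command” simply doesn’t exist.

```python
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 300]})

# A generated line like
#     df = df.remove_outliers(column="x", threshold=2.0)
# looks like idiomatic pandas, but remove_outliers() is invented -
# running it raises AttributeError. We can check without crashing:
print(hasattr(df, "remove_outliers"))  # False: the "command" doesn't exist
```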

:leftwards_hand: :exploding_head: :rightwards_hand:

I wonder if the inability to use concrete data (instead of deriving generic statements) is inherent in the method or whether they could improve that? Maybe they need two components that work together?

4 Likes

It’s dealing with totally abstract logical constructs. It has little or no understanding of cause and effect. It has never related the symbols it manipulates to real-world objects and situations. It needs a heavy dose of empiricism.

Give it some video cameras and manipulator arms. Let it play.

4 Likes

I’m not sure how far it can be improved with a pure unidirectional transformer model. It was developed to produce language, not to be a general AI. It just so happens that language can be applied to a hell of a lot of problems that it cannot necessarily solve.
They could probably improve it some, but they’d have to train it specifically. And training the abstraction below the language (i.e. how words relate to reality rather than just to one another) takes a lot more manual effort and would probably take a long time…
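
For anyone unfamiliar with the term: “unidirectional” here just means causal – each token can only attend to the tokens before it, which is why the model is built for left-to-right language production. A minimal sketch of that mask (plain numpy, nothing model-specific):

```python
import numpy as np

# Causal (unidirectional) attention mask: position i may only attend to
# positions 0..i - the model never sees "future" tokens.
seq_len = 5
mask = np.tril(np.ones((seq_len, seq_len), dtype=int))
print(mask)  # lower-triangular: 1 = may attend, 0 = no lookahead
```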

In the meantime, I’m glad ChatGPT is doing really well in all kinds of exams, but really, really sucks when actually trying to do the job. If we get very lucky, it’ll kick our educational institutions into gear to finally come up with training and tests that are actually relevant to the job you’re going to do.

5 Likes

Right, you hit the nail on the head. I was phrasing it clumsily.

Looking forward to that!

From an academic point of view I agree, even though it’s also the first step towards speech-activated household killer robots. Boston Dynamics is training for physical understanding, but not for language skills, right?

4 Likes

Language is currently too expensive to process on a mobile platform. You could make an interface to an API, but then you’re facing an issue: your language processor would be completely decoupled from the rest of your platform’s intelligence, i.e. translation from one to the other will be difficult and potentially very error-prone. It’s the equivalent of telling one guy what you want, and having him tell a different guy who speaks a different language what he’s supposed to do. Which, as all software developers know, works just splendidly… :roll_eyes:
Also, in this case both guys are relatively dumb.
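
A toy sketch of that decoupling problem (all names here are invented for illustration): the onboard side only understands a small, fixed command vocabulary, so anything the remote language model phrases differently is simply lost in translation.

```python
from enum import Enum
from typing import Optional

class Command(Enum):
    MOVE_FORWARD = "move_forward"
    STOP = "stop"

def parse_llm_reply(text: str) -> Optional[Command]:
    # The onboard planner knows only this tiny vocabulary; anything the
    # remote model phrases differently is silently dropped.
    normalized = text.lower().replace(" ", "_")
    for cmd in Command:
        if cmd.value in normalized:
            return cmd
    return None

print(parse_llm_reply("please move forward two meters"))  # Command.MOVE_FORWARD
print(parse_llm_reply("walk ahead a bit"))                # None - message lost
```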

Boston Dynamics is still more concerned with body mechanics than with autonomous problem solving. They build drones, not AIs on legs. Most of their onboard processing power is used just to move about efficiently without falling over under various circumstances (which, by the way, our brains also expend a whole lot of effort on. Don’t forget that only about 10% of the brain is dedicated to abstract thought; the rest just regulates the gritty mechanics like not falling over, delivering sensory input, actuating our muscles and so on).
They are able to put in some intelligence, like finding paths, opening doors, traversing obstacles and sometimes also simple job-specific things like pulling levers etc. Stuff you want your drone to be capable of doing to reduce the input load while operating it. But they do not really make decisions on their own.

5 Likes

This actually was (in some cases, still is) a known evolutionary pathway. Many dinosaurs are believed to have had more than one ‘brain’, each dealing with different information, and (possibly) in different ways.

The feature still exists in some species - octopus and cockroach, for example - but it seems to have been much more common in the distant past.

In the human brain, different areas serve different functions, and, whilst they communicate, they are also able to function autonomously (you keep on breathing when you’re asleep, for example).

Parallel processing of different kinds of information by different processors is complex, yes. But it’s by no means unusual.

5 Likes

Perhaps relevant: if you touch something hot, the ‘reflex’ that jerks your hand away fires before the signal from the heat-detecting nerves can actually reach your brain. The reflex is processed by direct connections between the sensory receptors and the nerves that trigger the muscle response. The point being that “commands” of sufficient priority need to be distinguished and processed at the ‘extremity’ level of your network, even if that extremity level consists of relatively simple components.
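
In network terms, that’s priority handling at the edge rather than at the central controller. A toy sketch, with all latencies and thresholds invented for illustration:

```python
# High-priority signals are handled locally at the "extremity" instead of
# waiting for a round trip to the central controller. Numbers are made up.
CENTRAL_ROUND_TRIP_MS = 120  # hypothetical sensor -> brain -> muscle delay
LOCAL_REFLEX_MS = 15         # hypothetical local reflex-arc delay

def handle_signal(priority: int) -> str:
    if priority >= 9:  # e.g. "hand on hot stove"
        return f"local reflex, ~{LOCAL_REFLEX_MS} ms"
    return f"forwarded to central processing, ~{CENTRAL_ROUND_TRIP_MS} ms"

print(handle_signal(10))  # act before the brain even gets the message
print(handle_signal(3))   # routine input goes upstream
```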

There is some functionality that becomes inexplicable if you try to differentiate between “brain” and “nervous system.” We have “a brain” because our nerve tissue is lumped up in one place for convenience of defense, and, as relatively small creatures, we can afford for almost all processes to involve a ‘there and back’ transmission delay. Dinosaurs’ nervous systems formed multiple ‘lumps’ because, at their size, they had functions that couldn’t afford the transmission time.

4 Likes

Hello, Skynet…

3 Likes

That is such a god-damned rookie mistake, though. Just about every thought experiment ever conducted on general AI behaviour predicted that exactly this was going to happen, multiple potential solutions have been proposed for the problem, and they just go ahead and… do none of that? Yeah, that bodes really well for the future, great job! :man_facepalming:

3 Likes

I’m a bit confused about the denials - surely it’s a good thing that they’re testing in simulation?

I spent 25 years as an investigator. Nothing raises my suspicions so much as looking at a company’s accident and injury records, and finding nothing there. Nothing says ‘incompetent’ so much as the engineering manager who tells you ‘These machines never malfunction’.

If people tell you nothing ever goes wrong, they’re either lying, or they’re not looking.

3 Likes

This seemed odd, so I went into the original articles a bit. Here’s a statement by the man himself:

in communication with AEROSPACE - Col Hamilton admits he “mis-spoke” in his presentation at the Royal Aeronautical Society FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: “We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome”.

This does make sense, though. He might have embellished a traditional thought experiment to make it a bit more engaging, which would perfectly explain why the example sounds almost exactly like something out of AI Safety 101. It’s because it is.

The alternative is that we have a military incompetent enough to just ignore all the work done up to this point, and a PR person incompetent enough to think that denying the thing would be a great idea if it had actually happened. Not implausible incompetence, but juuuuust enough to make me lean towards “spokesman decided to spice up his presentation a bit”.

3 Likes

This is a time-lapse video captured on Wednesday night.

Reflection or moon? Trippy, eh?
This next video about Mars has me questioning what was seen earlier on a basic digital video camera.

2 Likes

OpenAI stole training data and made a web page that sells hundreds of “X in the style of artist Y” images for pennies, and they don’t pay commission to the sources for the percentage they contributed. Any large company could have created such an AI in the last few years; they just didn’t dare sell it as a product because it’s clearly on the dark end of a legal grey area. It’s about who sets the legal precedent first and makes money first. Other companies (who maybe tried to find a way to pay the artists fairly) are now commercially punished.

Now, Cyan can make the choice to only generate non-contemporary Art Nouveau and generic text snippets and hope that the training materials used in the output were 98% free to use, but not every company uses it “ethically” like that.

3 Likes

I’d like to propose a Gedankenexperiment (thought experiment). Say the invention of the car made horseshoe makers redundant. We consider this a normal step in the development of technology; 99% of horseshoe makers had to retrain as car mechanics.

A car is made by combining new parts created with knowledge from different technology domains such as metallurgy, rubber, mechanics, physics and chemistry. A car factory does not produce “a million horseshoes for pennies”; it has added value. Even the horseshoe makers could agree that cars were better than horses.

If 99% of artists and writers happily retrained to use ChatGPT/DALL-E instead, then fewer people would produce art and text to train GPT with, and the neural network would stagnate.

It’s not the same step of technology evolution yet. It directly depends on the people that it’s putting out of a job, and that’s what puts people off. Thoughts?

2 Likes

If AI wants to make cars and the parts that go with them, I am all in. In fact, robots already help make cars. If AI-created “art” becomes more interesting to people than art inspired by human brains filled with thought and emotion, count me out. I fear the thought.

2 Likes

Again, it’s powerful as a tool in creation, for the preliminary steps of workshopping and experimentation. Anyone who turns in something fully generated from an AI prompt will be called out for the hack they are. But hey, I still get people from China selling my photos, which I took with a real camera, on cups and t-shirts, and that shit’s just gonna happen. Frankly, I’d prefer capitalism and money just fuck right off, and then, guess what, we can just make shit for the sake of making shit.

And an artist who has a problem with an AI trained off copyrighted material should turn off their own brain; we are all inspired by everything around us, and we pour it out in a subconscious way where most of the time we don’t know where we are pulling it from. Quantitative research is incredibly important, and I’m proud to know my images were used to create and expand the field of AI image generation.

Artists losing money over this? No, not at all. It’s just ego.

You have two paintings. Identical. One’s real and one’s fake. You know which one is fake. You can take one home. You’ll go with the real one. But hey, sometimes someone might buy a bootleg knock-off of someone’s work. Humans gonna human and grift. If you’re an artist making art in a capitalist world, then you’ve gotta accept that other people not connected to your work are going to exploit it one way or another.

I understand your comparison to tech advancements putting workers out (scribes > letterpress > computers), but this isn’t the same thing. Not yet, anyway. That’s the thing: it’s all just a fun magic trick of denoising and brute forcing. There is zero intelligence. It’s expediting an artist’s workflow, but it is not replacing anybody. Anyone who has done that, fired workers because of ChatGPT or Stable Diffusion, is a moron who probably sold half their IP so they could invest in NFTs and drank every drop of silly juice the AI tech bros were selling them. Because the thing I’ve noticed is that the executives selling the tech make it sound better and/or scarier than it actually is right now, and the Boomer CEOs who have to get Karen from HR to find where the on switch is drink those lies up, ’cos they just heard they get to fire half their art and writing department.

I’m not happy with some of the names tied to OpenAI, but I am an open-source proponent, and I am very happy that AI is taking an open-source approach, so those in low-income families or impoverished areas have a chance to access, learn and grow from freely shared knowledge. But to call it stealing, and again, this is the communist/socialist in me, feels wrong to me. I’m more concerned by people writing Python programs to sift through websites for personal information and build databases on people to be sold to marketers than I am by a program coded to scour the internet for as many images as possible to train an AI to create mock-ups on request. It almost feels like powerful people are trying to reissue those old fears so they can try to get powerful legislation over control of the internet again.

I dunno where I’m going with this, haha. Woke up with a flu and I’ve typed too much to just delete it all now… Em. It reminds me of a song I haven’t thought about in years. A novelty act that played shows locally and had a cult following. Their first album was 5 minutes long and had 30 songs (and 500 titles; you ticked the one you wanted to be the album name on the cover).

Okay, it’s been wiped from the internet, but the only lyrics of one of the songs are “big robots can’t paint very well. Big robots can’t paint very well”, and it’s all I’ve been able to hear when AI art is brought up XD

edit: Wow sorry for the cursing, and the all of that.

3 Likes