Source
Video: https://www.bilibili.com/video/BV1zo4y1x7LL
YouTube: https://www.youtube.com/watch?v=qpoRO378qRY&t=1884s&pp=ygUPZ2...
Verbatim excerpts
Very recently, I've changed my mind a lot about the relationship between the brain and the kind of digital intelligence we're developing. I used to think that the computer models we were developing weren't as good as the brain. Over the last few months, I've changed my mind completely. And I think probably the computer models are working in a rather different way from the brain. They are using backpropagation, and I think the brain probably is not. A couple of things have led me to that conclusion (I take this to be a reference to his departure from Google).
...
If you look at these large language models, they have about a trillion connections, and things like GPT-4 know much more than we do. They have sort of common sense knowledge about everything. They know probably 100 times more things than we do. They've got a trillion connections, but we've got 100 trillion connections. So they are much, much better at getting a lot of knowledge into only 1 trillion connections than we are. And I think it's because backpropagation may be a much better learning algorithm than what we've got. That's scary.
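The arithmetic behind this claim can be made explicit. A rough sketch, using Hinton's own ballpark figures (not measurements):

```python
# Hinton's rough figures, not measured values.
model_connections = 1e12    # ~1 trillion connections in a large language model
brain_connections = 100e12  # ~100 trillion synapses in a human brain
knowledge_ratio = 100       # "they know probably 100 times more things than we do"

# Knowledge packed per connection, relative to the brain:
# 100x the knowledge in 1/100th the connections.
efficiency = knowledge_ratio * (brain_connections / model_connections)
print(efficiency)  # 10000.0 -- roughly 10,000x more knowledge per connection
```

By these numbers, the model packs on the order of 10,000 times more knowledge into each connection than the brain does, which is what makes backpropagation look like the better learning algorithm.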
...
It (backpropagation) can pack more information into only a few connections, where a trillion counts as "only a few."
...
(An example of why we should be scared of the computer models.) A computer is digital, which involves very high energy costs and very careful fabrication. You can have many identical models running on different hardware that do exactly the same thing. Suppose you have 10,000 copies. They can be looking at 10,000 different subsets of the data. Whenever one of them learns anything, all the others know it. When one of them figures out how to change the weights so it can deal with some data, they can all communicate with each other and agree to change the weights by the average of what all of them want. And now, the 10,000 things are communicating very effectively with each other. So they can see 10,000 times as much data as one agent could. People can't do that. If I learn a whole lot of stuff about quantum mechanics, and I want you to know all that stuff, it's a long, painful process of getting you to understand it. I can't just copy my weights into your brain.
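The weight-sharing scheme Hinton describes can be sketched in a few lines. This is a minimal illustration, not his actual setup: several identical copies of a toy linear model each compute an update on their own data subset, then all of them adopt the average update. All names and the model choice are mine.

```python
import numpy as np

# N identical copies of one model, each seeing a different data subset,
# all applying the AVERAGE of the updates each copy wants. (Toy linear
# regressor; illustrative only, not Hinton's actual system.)
rng = np.random.default_rng(0)
n_copies, dim, lr = 4, 3, 0.1

true_w = np.array([1.0, -2.0, 0.5])   # target the copies try to learn
weights = np.zeros(dim)               # every copy starts from identical weights

def gradient(w, X, y):
    # Gradient of mean squared error for a linear model.
    return 2 * X.T @ (X @ w - y) / len(y)

for step in range(200):
    # Each copy looks at its own subset of the data
    # and proposes a weight change...
    updates = []
    for _ in range(n_copies):
        X = rng.normal(size=(8, dim))
        y = X @ true_w
        updates.append(-lr * gradient(weights, X, y))
    # ...and all copies agree to change the weights by the average
    # of what all of them want, so whenever one learns, all learn.
    weights = weights + np.mean(updates, axis=0)

print(np.round(weights, 2))  # converges toward true_w
```

Because every copy applies the same averaged update, the copies stay bit-identical while effectively training on n_copies times as much data per step as any single one sees.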
...
Host: When I have an uncomfortable feeling, I just close my laptop.
Yes. But if they're much smarter than us, they'll be very good at manipulating us. You won't realize what's going on. You'll be like a 2-year-old. You'll be that easy to manipulate. So even if they can't directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.
...
(Host) In some sense, talk is cheap. If we then don't have actions, what do we do?
I wish it was like climate change, where you could say, if you've got half a brain, you'd stop burning carbon. It's clear what we should do about it. It's clear that's painful, but it has to be done. ... I don't think there's much chance of stopping the development (of AI). What we want is some way of making sure that even if they're smarter than us, they're gonna do things that are beneficial for us. That's called the alignment problem. But we need to do that in a world where there are bad actors who want to build robot soldiers that kill people. And it seems very hard to me. So I'm sorry. I'm sounding the alarm and saying we have to worry about this. And I wish I had a nice simple solution I could push, but I don't. But I think it's very important that people get together and think hard about it and see whether there's a solution. It's not clear there is a solution.
...
Stopping them (AI development) might be a rational thing to do, but there's no way it's gonna happen. One reason is that if the US stops developing, the Chinese won't. They're gonna be used in weapons (They refers to AI tech). Just for that reason alone, the governments aren't gonna stop developing them. Google brought out transformers and diffusion models. And it didn't put them out there for people to use and abuse. Microsoft decided to put it out there. Google didn't really have much choice. If you're gonna live in a capitalist system, you can't stop Google from competing with Microsoft. So I don't think Google did anything wrong. But I think it's just inevitable in a capitalist system, or a system with competition between countries like the US and China, that this stuff will be developed (this refers to AI tech). If we allowed it to take over, it would be bad for all of us. I wish we could get the US and China to agree like we could with nuclear weapons. We are all in the same boat with respect to the existential threat.