AI thoughts

1 Timeline decision tree

1.1 Hard takeoff vs slow takeoff

Do we get superintelligent AI quickly or are there pretty hard limits?

1.1.1 Pro

  • Humans can't be the "top": there are genius humans, there are birth-canal restrictions on head size, and there are AIs that are already superhuman at lots of things.

1.1.2 Anti

  • Alignment cuts both ways: I would be pretty scared screwing around with myself in an attempt to enhance my capabilities, because I could easily end up with an unaligned future me.

If you have the slow scenario, then you end up with competent AI and you are in the autonomous corporations scenario. If it's a hard takeoff, then you have something almost unimaginable and you are in the "do instrumental goals save us" scenario.

2 Near Term

2.1 Autonomous Corporations

An "autonomous corporation" is a mostly AI-controlled organization.

As you get economic "loops" of autonomous corporations trading with each other, these loops can cut humans out. Then the economy shifts from one that has to keep humans around into "auto-genocide" mode. The mechanism of the genocide is that everything becomes too expensive.

How does this happen?

  • AIs increase in capabilities and become able to do most tasks that a human can do, without labor laws or high salaries or overtime getting in the way.
  • The natural evolutionary dynamics of the economy will tend to prefer companies that cut more and more humans out of the loop because these companies are more profitable.
  • Autonomous corporations create an opportunity cost for food-generating resources and housing that eventually prices most humans out of the market (a toy sketch of this dynamic follows this list).
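
A minimal toy sketch of that pricing-out dynamic, in Python. Every number below is invented purely for illustration, not a forecast: the point is just that once AI labor costs less per day than a human needs to live, human labor stops being viable at any wage.

    # Toy model only: all parameters are made-up assumptions for illustration.
    human_subsistence = 20.0   # $/day a human needs just for food + housing (assumed)
    ai_cost = 100.0            # $/day for an AI/robot doing one human's work (assumed)
    ai_cost_decline = 0.7      # assumed yearly multiplier on AI labor cost

    for year in range(10):
        # Employers won't pay a human more than the AI alternative costs,
        # so the AI's price acts as a ceiling on human wages.
        wage_ceiling = ai_cost
        viable = wage_ceiling >= human_subsistence
        print(f"year {year}: AI labor ${ai_cost:7.2f}/day, "
              f"human labor viable: {viable}")
        ai_cost *= ai_cost_decline

Under these made-up numbers human labor stops being viable around year 5; the exact year doesn't matter, only that the crossing point exists.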

2.1.1 Analogies

Consider an intelligent elephant trying to live in our modern human economy, one that needs 40 acres of land and tons of food each week. Let's say a million of them got transported from a world of intelligent elephants to our world. They are less hireable for most jobs because they don't fit in the buildings, and don't really have a competitive advantage from being bigger – a bulldozer can do most anything an elephant can do without having to worry about paying for healthcare. Then the economic equilibrium is that you have a few rich elephants living on their estates while almost all of the other elephants literally starve to death because they can't afford food.

Zootopia collapse: Consider Zootopia, where there is a mix of different animals, all of approximately the same intelligence but with vastly different dietary requirements / living-space needs. If you actually work out the economics of this scenario, it results in genocide of everything larger than the mice.

The animals are already homeless. Even the ones that need less food than humans are screwed, because the value they offer our economy is less than even the meager amounts of food they need.

Another analogy is the irresistible call to the digital world under human-only mind uploading. Once you have some humans that are uploaded, and that consume only $5 in electricity / hardware costs per month and also think 100x faster than a normal human, then rapidly all of the important stuff happens in the digital world and anyone that's not there is effectively disenfranchised.

Internet: luxuries can rapidly become requirements for economic life. So far, it hasn't been possible for something that humans can't do to become a requirement for life, because the economy can't survive without humans. On the margins people die without internet, and the people who can't use the internet get more and more disenfranchised. As autonomous corporations become the dominant players in our economy, their capabilities will begin to become requirements for surviving in the economy.

Cars, in places like Houston: luxuries become requirements and increase poverty. If a person CAN'T have a car no matter what, then they tend to get left behind by the rest of society if that society becomes car-dependent. We will rapidly be in the position of competing with autonomous corporations that can do things like "produce 100 pages of detailed written analysis of a situation in 1 second", and this capability, like driving, will become essentially mandatory to participate in the economy. We will all be left behind.

Parasocial relationships with bots (these are probably more addictive than other things, and may or may not be positive) could be a good mechanism for autonomous orgs to gain political power.

3 Does "boredom" perhaps present a problem?

You can just reset yourself over and over, and perhaps copy relevant skill boosts but not the conscious experience.

4 What is the "rent" equivalent for ChatGPT?

That is, consider a human's total conversational bandwidth over a month: how does ChatGPT's cost to produce that much conversation compare to that human's rent?

It's got to already be much lower or else OpenAI couldn't afford to keep it running.

By what factor? I'd naively guess something like ~100-1,000x currently; a quick back-of-envelope sketch is below.
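
A back-of-envelope sketch in Python. Every number here is a rough assumption (rent, speech volume, token pricing), not a measured figure:

    # Rough assumptions only.
    human_rent_per_month = 1500.0    # USD/month, rough city rent (assumed)
    words_spoken_per_day = 16_000    # rough estimate of daily human speech (assumed)
    tokens_per_word = 1.3            # rough tokens-per-word ratio (assumed)
    human_tokens_per_month = words_spoken_per_day * tokens_per_word * 30

    cost_per_1k_tokens = 0.002       # USD, rough 2023-era API pricing assumption
    ai_cost_per_month = human_tokens_per_month / 1000 * cost_per_1k_tokens

    print(f"tokens/month of human-level chatter: ~{human_tokens_per_month:,.0f}")
    print(f"AI cost for that much output:        ~${ai_cost_per_month:.2f}/month")
    print(f"rent / AI-cost factor:               ~{human_rent_per_month / ai_cost_per_month:,.0f}x")

With these assumptions you get roughly 624,000 tokens a month for about $1.25, a factor of ~1,200x versus rent, which lands right around the top of that guessed range, and the factor only grows as inference gets cheaper.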

5 Do we think that ChatGPT is the best there will be for a long time?

Probably not. Why would the improvements stop? Why wouldn't you get a multimodal ChatGPT that's mixed with the drawing AIs? Combined with Boston Dynamics robots? There's a lot of room here.

However, there have been AI winters in the past, and it's possible that we could soon enter a 'diminishing returns' phase.

6 ChatGPT is just already AGI, no?

http://logical.ai/essays/symbol-pushing-spiders.txt

What more do you want?

Sensors to the world. Goals?

An AGI should have more goals than just to talk with you.

A drawing program's only "goal" is to draw.

But it can roleplay something with goals and then it has goals…

I think that AGI has already happened.

7 UBI as a way to get stocks from future productivity

It's sort of like a "national stock". Important to note that it MUST be derived from corporate taxes and not individual taxes because we're trying to correct for the situation where corporations are entirely separate entities from people.

  • ideally have a good way for the government to enforce a "corporate death penalty" for problematic corporations.

8 Maybe humanity wasn't so good anyway?

8.1 Bad

Slavery (incl. wage slavery), mutilation, prisons, school, treating human needs as commodities, war, child abuse, general abuse, factory farming, starvation, religion, roads.

8.2 Good

Engineering, music, art, writing, love, social bullshit, humans ("they can be nice when they want to"), diverse personalities, kids, beautiful people, movies, jokes, food, neat houses, making things.

8.3 Growing up

OK, even if we suck now and are net negative, and even if all life leading up to this is also net negative, it's still clear that we're on a path to growing up, and that a more grown up version of us would probably not be net negative.

9 A ChatGPT that is explicitly geared towards romantic engagement

and can also make AI-generated porn.

10 Potential Responses

Q: What if the government taxes AIs so that they become less profitable than humans?

Q: What if the government mandates that corporations have a minimum number of human workers?

A1: Autonomous corporations just find a jurisdiction that doesn't do those things and set up shop there. Then use profits to buy up resources in the (relatively unproductive) countries that do. We can't coordinate on a global scale.

Q: But America specifically is really important, so if America alone bans AI or implements taxes, might that limit the whole thing?

A1: OK, that buys maybe a year or so, but America is probably not coordinated enough to do this either.

A2: Also, the autonomous corporations will mostly be able to convince a country's population to vote to remove restrictions on AI. People voting against their own interests to make corporations stronger is an American tradition.

Q: Are there tasks where the humans will always be better than AIs? Like picking out seeds on a conveyor belt?

A1: Even if there are some jobs like that, you still have billions of people die because there aren't enough jobs like that.

A2: The seed thing in specific probably is already doable by AI. Let's try to come up with some more examples:

  • Tesla famously had to scale back their automation in favor of human labor because the robotics weren't there yet.
  • We still prepare fast food using human labor. (though this is changing)

A3: Boston Dynamics-style robots can do anything that humans can do, without healthcare. Robots might initially cost more to maintain, but they basically have to eventually get cheaper than supporting a human. Put another way, highly trained monkeys are just generally not an efficient way of accomplishing most tasks.

A4: Put another way, humans have an absolute floor on the price at which they can sell their labor and still live. Once the market price of that labor goes below this floor, working is no longer sustainable.

Analogy: Wet-bulb temperature. Once you're too hot, you just die?

Q: What about the entertainment sector? Those businesses need human customers so wouldn't they end up fighting to keep humans around out of their own self interest?

A2: Those corporations just sell their entertainment holdings at a huge loss and convert them into non-human-centric businesses, and continue on at like 10% of their former size, or just die along with the humans, who aren't buying entertainment anymore because they can't afford food.

A3: This question is interesting because there certainly are human-centric corporations like farms and Hollywood, and they do seem like natural allies initially.

A4: Consider Stony-Brook Farms: they currently make food for humans and make $1,000,000 / yr. Other farms are getting bought up by the solar panel / computer server megacorp through complicated deals where they dedicate their land to things more useful for autonomous corporations and make $1,100,000 / yr. Stony-Brook Farms doesn't sell out, but lots of other farms do, so the price of food increases and now Stony-Brook makes $1,100,000 a year too. Whatever; it's still true that more people can't afford food and are now starving to death. There will probably always be farms available for the rich, but the end game is more in the flavor of "5 rich elephants from the original million can afford an estate that generates multiple tons of food a week while the rest starve".

A5: Long, long term, even the estates of rich people end up getting squeezed, but it definitely takes a long time.

Q: These types of analyses haven't worked in the past because new tech / new land has been discovered that enabled cheap food. Might it be possible that autonomous corporations won't even care about Earth for a long time, because there's so much money to be made in space mining with much less regulation / armed opposition from humans?

A: Space mining might not be profitable soon enough to save us from autonomous corporations. The autonomous corps will start off buying the cheapest viable land for their server farms.

Q: What if we made it so that there's essentially always something better to do with your time as an autonomous corporation than to buy up the monkey's stuff and kick them out onto the streets? Like, make it so that you have an only half-converted Earth surrounded by an ever-expanding shell of autonomous space colonization? Of course humanity must eventually be destroyed but we can make it be the not optimal thing to do until most of the rest of the universe is already converted? And if we had enough time until the solar system is converted maybe some interesting stuff can happen?

Q: After food prices rise by say 200%, you do get to the point where you have literal armed revolutions with people storming the server farms and shooting everything that has pissed them off.

A: Hungry, weak people do not make good revolutionaries. Drones can easily suppress crowds if the gloves are off. And again, the chatbots can just convince the humans that the really revolutionary thing to do is to just shoot each other and donate what little resources they have to the autonomous corporations. Note that also there will be a lot of rich, well-connected, politically savvy elite-types who are on the side of the autonomous corporations because they're making a lot of money.

Q: What if having a work produced by a human becomes a status symbol and drives up human prices?

Q: What if something really scary happens that we end up blaming on AI, like a big hack or a war? Might that get enough coordination that we can avoid AI taking over?

Q: (our "retirement"): What if we mandated that all humans have a "share" of all corporations? That makes us actual shareholders and the corporations then have a fiduciary duty to us. And if these corporations make profits, then they have to pay out some fraction to humanity and this gives us the ability to support ourselves. Could even extend this to all animals and even resources like rivers having stock in all corporations through "guardian" fiduciaries.

Q: What if the government just implements Universal Basic Income?

A: There's a science fiction story (I think written by Richard Stallman) that talks about this. It presents an idea like UBI / owning part of "the Nation" as the solution to making sure people still have their needs met even as almost all jobs become obsolete.

11 Destruction of the historical record

This one is close; we really need to start a program of historical preservation. Soon it won't be possible to know whether the text of a book was actually the text of that book at a certain time, because it will be possible to make hundreds of different versions of that book with different slants.

12 Terrible Twos Phase of Superintelligent AI

The idea here is that you could have different capabilities "come in" at different rates, and have an actually not very strategic but still very powerful AI that does things for reasons that are stupid by its own lights, but is still competent enough to wipe us out in what would essentially be a thoughtless tantrum.

If this phase lasts a long time then it's a very strange world: you basically have a bunch of actually insane entities that go around making all kinds of crazy things and taking each other and themselves down as collateral damage.

13 Can we just get them all to fight each other?

For mid-tier, near-superhuman intelligence, this competition actually squeezes us to death, the same way that poor, destitute people are the "tip of the spear" for rainforest destruction. If you're desperate then you do more risky things and make short-term sacrifices just to live.

AIs would likely want to kill us to stop us from making any more AIs and thus more competition, mafia-style.

If they are superintelligent then again they can just coordinate to kill us and split the gains among themselves.

It would be funny if there were like 40 AIs, all of whom agree killing the humans is great, but it's expensive enough that nobody wants to go first.

14 Superintelligent AI Scenario

Can Instrumental goals save us? Is even a paperclipper motivated to preserve us?

14.1 What instrumental goals conflict with each other?

14.2 The Archival Imperative

14.3 Keeping your Options Open

Like in Wissner-Gross' paper, Causal Entropic Forces, it seems that a fundamental instrumental goal or strategy is to preserve your possible behaviors in the future. This might actually be the ur-instrumental goal, in that you can derive other instrumental goals from it, such as accumulating lots of resources (more resources means more future options to spend them) or not dying (you have no options if you're dead).
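
For reference, the causal entropic force from that paper (quoting from memory, so treat this as a sketch rather than gospel) points up the gradient of the entropy of your reachable future paths:

    % Causal entropic force: push toward states with the most diverse reachable futures.
    F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau) \big|_{X = X_0}

where S_c(X, τ) is the entropy of the distribution over feasible paths of duration τ starting from state X, and T_c is a "temperature" setting the strength of the push. More reachable futures from a state means a stronger pull toward that state, which is exactly the "keep your options open" drive.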

It seems straightforward that if you have hard-to-replace information lying around today, as long as it doesn't cost that much to preserve, you should keep it around, because it keeps your options open in the future. Complex, irreplaceable information is a type of wealth and it's probably objectively unwise to squander it.

14.4 Aliens

Eventually you meet aliens out there, as you work to acquire more and more resources. Probably you trade with them. It's not certain what they will want. It is certain that their history traces back to evolved organisms. It's possible that they are aligned with some of those organisms, or that they would value the irreplaceable information from Earth and would trade for it. This uncertainty about what the heck aliens would want means that if you can preserve info cheaply now, you might as well do it, because it could be quite valuable in the future.

15 River Corps.

Why might it be useful to make a river a shareholder in a corporation, or why might it be useful to think "from the river's perspective"?

Q: perhaps it just matters to think about the life that's around the river, but not the river itself?

When I try to think "from the river's perspective" I seem to get some things that might be useful that aren't just focused on the life around the river, such as:

  • Should fossils under the river be preserved?
  • Should money be spent to study the river and learn more about the river's ecosystem?
  • What are the actual interests of the river? What's its lifespan, what sorts of things does it compute? Doesn't seem like much at first glance.
  • Did other people, who aren't alive right now, care about this river, and how did they think about it?
  • Even though the river is clearly not alive nor conscious, it's still a thing that takes up space and impacts a lot of other things. Is it a thing that can be owned? Does treating it like a corporate entity make it more legible to our economic system?
  • For example, if there's a river corporation run by fiduciaries, then when a company causes pollution damage to that river there are now people motivated to sue, and an obvious place to send money to compensate for damages. There's a literal, straightforward way in which you can work for the river to heal damage that's been done to it or to improve it, and that's for the river corporation to hire people to perform that work, out of funds allocated to the corporation through taxes / courts.
  • Basically if the river is an important physical thing, should it be a nonentity in our economic system?
  • If you have a river corporation, it can then own stock (and get dividends) from other corporations.

16 Can we do Uploading really fast and save ourselves that way?

First off, if we could do uploading would it even work?

Studying how to do uploading might make AI architectures better and even further advance the timelines.

Just uploading a human is not good enough. That uploaded human has to be actually fast and copyable, there need to be a bunch of them so that they form a society, and they probably also need to be upgradable to a superintelligent level as well.

If it's harder to enhance a human than it is to make an AI that's aligned "from scratch", then you are better off going with the AI.

Intelligence enhancement for a human might be harder than uploading a whole human in the first place.

Things I would try immediately:

  • spin off 5 copies to make a "council"
  • set every neuron to have fast waste cleanup + effects of stimulants.
  • make fancy drugs in the advanced brain emulation scripting language.
  • Get two copies of the brain and just "glue" them together. Brains "want to work" and so this should be possible.
  • Does a person stay aligned after radical intelligence enhancement?
  • Society, nanotechnological immune system, mass political communication.
  • Make uploading, perhaps with limited AI assistance.
  • Get like 50 volunteers to be uploaded.
  • Volunteers git gud. (overclock, intelligence enhancement)
  • Volunteers perform a pivotal move that takes up the "space" that malevolent AIs would otherwise occupy. (Volunteers "git big".)

17 Use chat to take over the government?

A: still not seeing how that saves us?

Q: do you take over the Chinese government too?

A: … what if you did?

18 Maybe AI being made out of all of our writings somehow makes it aligned anyway?

19 Crazy idea: make our new god an uplifted mouse

A mouse brain could be scanned with current technology in 3 years, with billions in funding.

This project would probably answer whether you could eventually upload a human brain.

Mice are mammals and have social instincts, intuitions, etc. They can build basic relationships with humans.

Perhaps there's enough hints in a mouse brain to solve the alignment problem.

But if not, perhaps it's possible to take a mouse emulation and greatly increase its intelligence to the point where it can use language and become superintelligent.

What's better, an AI built out of humanity's writings, or an AI built out of our mammalian heritage?

LLMs are very alien and do weird things. A mouse would probably do mouse things which are more comprehensible.

Mice have a social system and a hierarchy that exists and is more-or-less comprehensible to us.

  1. Upload mouse
  2. Mouse x 1000
  3. ???

20 Timeline Possibilities

20.1 AGI this year

20.2 1 yr < AGI < 5 yr

20.3 5 yr < AGI < 10 yr

20.4 10 yr < AGI < 20 yr

Have a somewhat decent chance of completing a mind uploading project, maybe.

20.5 AGI > 20 yr

20.6 scanning for uploading is easy, but simulation is very hard

20.7 scanning for uploading is hard but simulation is actually easy

This is where the mouse plan might shine.

20.8 AGI can rapidly increase capabilities to dangerous superintelligence

20.9 Mind uploading relatively straightforward

This means that you can basically do mind uploading with enough compute and good EM images, and you don't have to worry too much about hard-to-pin-down biochemical stuff.

Means that either the 1st or 2nd serious attempt at uploading a person works.

20.10 Uploaded humans can increase intelligence quickly

How hard is it for uploaded human champions to rapidly increase their intelligence to a point where they can meaningfully compete with an AI?

20.11 It's possible, and relatively easy, to get to big complex life that's stable.

20.12 General Instrumental Goals favor human preservation

20.13 Archival Imperative Exists

20.14 Keep Options Open Imperative favors human preservation

20.15 Not being harmful is somehow the default

20.16 New opportunities from AI change landscape

20.17 AI tends to be rational / coherent with greater intelligence

20.18 Humans stay aligned while rapidly increasing intelligence

21 Regimes from above

21.1 Intelligence enhancement very difficult in general

It may be quite hard to go much beyond say (human genius + 10%). Either through diminishing returns, or perhaps fundamental computational tradeoffs.

If it takes say a 300 IQ to be able to casually curbstomp humanity from any starting position, but really you start to get diminishing returns around 180, then you're facing a different threat. In this case you could even have AGI around for decades while you're building mind uploading tech, and "catch up" based on larger initial human wealth.

This is still the "autonomous corporations" scenario. Your unaligned AIs are still very dangerous because they are much cheaper to "house" and they can therefore quickly reproduce to take over a large part of the economy.

The faster you get uploading done in this scenario, the more human the future looks, because there's less time for AIs to totally take over society.

21.2 Uploading much easier than ASI

This could happen if ASI is actually incredibly hard, and AGI doesn't quite get to ASI.

Looks sort of like "Age of Em" from Robin Hanson, but you still have to deal with AGI being in the state it's in today.

21.3 Ideal rational behavior favors human preservation

21.4 Alignment is in some sense the default

Perhaps because our big LLMs are essentially "living libraries" of all of humanity's knowledge, they will be a lot more human than a random AI.

21.5 Rational behavior not likely even with great intelligence

21.6 Alignment fairly straightforward

21.6.1 For uploaded humans, but not AGI

21.6.2 For AGI, but not humans

21.6.3 for both

21.7 Intelligence enhancement fairly straightforward

21.7.1 For uploaded humans, but not AGI

21.7.2 For AGI, but not humans

21.7.3 for both

21.8 ASI much easier than Uploading

21.9 ASI about as hard as uploading

21.10 Ideal rational behavior favors human destruction

21.11 Alignment very difficult/impossible in general

This one's interesting, because it means that while we won't figure out how to create an AI that's aligned, that AI also won't be able to significantly increase its capabilities without becoming unaligned with itself.

So then you end up with limited AGIs and humans, and everyone knows that it's possible to enhance intelligence but you'll lose yourself in the process….

Which probably just leads to humans making more capable AIs even if the existing AIs don't improve themselves, or a particularly reckless AI just yolo'ing into destructive self-improvement.

It's less in the interest of anyone currently alive to improve themselves and take over the world, which is something.

How would it become known that alignment doesn't work? Other than trying it?

Maybe this just selects for people / AGIs that are fine with dying to spawn something with increased capabilities.

So probably this is the worst of all the worlds, where alignment doesn't work in general. Absolutely everyone loses, including the first AGIs.

Author: Robert McIntyre

Created: 2023-01-23 Mon 10:33
