Cyberpunk Fantasy Heartbreaker: Magic and Technology

General questions, debates, and rants about RPGs

Moderator: Moderators

Grek
Prince
Posts: 3114
Joined: Sun Jan 11, 2009 10:37 pm

Post by Grek »

Lokathor wrote:the copy starts taking in sense data and thinking and dreaming and it diverges.
This is essentially mind forking, which is something that isn't desirable for the game.
Chamomile wrote:Grek is a national treasure.
Username17
Serious Badass
Posts: 29894
Joined: Fri Mar 07, 2008 7:54 pm

Post by Username17 »

Using the "software is dead" principle, artificial intelligence is actually totally fine. You make a machine brain and it thinks - just like you can make a biological brain and it will think. The issue we're concerned about is getting into an Agent Smith or Paranoia scenario where it becomes plausible for large numbers of the same person to be in the world, either simultaneously or sequentially. But that seems a solvable problem:
  • To make an artificial brain, you need to create an actual physical object that has the dedicated hardware to make a brain go.
  • A complete set of brain parts isn't just something you plug in and turn on; to get actual sapience you need to "teach" it for a fair amount of time, and in doing so it becomes unique.
  • The firmware cooking that makes a machine intelligence function is difficult or impossible to fully map or replicate with current technology, just like for human brains.
From a technobabble standpoint, we embrace Moravec's Paradox: A machine that plays chess or solves math problems can be mass produced and copied; but a machine that has actual intelligence and can make decisions and actually substitute for a human being cannot.

-Username17
User avatar
Vebyast
Knight-Baron
Posts: 801
Joined: Tue Mar 23, 2010 5:44 am

Post by Vebyast »

FrankTrollman wrote:From a technobabble standpoint, we embrace Moravec's Paradox
Moravec's Paradox is totally not technobabble. It is an accurate summary of about 95% of my job. :tongue:
A Man In Black wrote:I thought we didn't have full backups of a human brain. Are they flashing a partial backup?
Or they're trying to flash memories that they hope will lead to an imitation of proper brain formation; or they've made their absolute best guess at constructing a brain and it rarely comes out better than a six-year-old (also opens up the possibility of variable-cost-variable-stress cauldron born); or the creation of the brain dump was an absolutely unique, accidental occurrence, like the isolation of the HeLa cell line.
DSMatticus wrote:There are two things you can learn from the Gaming Den:
1) Good design practices.
2) How to be a zookeeper for hyper-intelligent shit-flinging apes.
User avatar
tzor
Prince
Posts: 4266
Joined: Fri Mar 07, 2008 7:54 pm

Post by tzor »

I'm not sure why we have to be so afraid of the brain backup. The biggest question is how one "reads" and "writes" to a human brain. Unlike computer memory, there is no simple "erase memory" function. ("Removing" things from human memory is effectively breaking links to the neurons.) You can add memories to a human brain (the write function), but those memories are organically incorporated into the brain; it's not like you can copy and paste things from one brain to another.

Without the ability to write to the human brain as-is, a brain backup becomes next to worthless. But it might be possible to analyze the brain backup with software, even run it in simulation mode. I'm not sure why the possibility of a Max Headroom is completely out of the question; even the software simulation known as Max quickly diverged from its original backup model once activated.
RiotGearEpsilon
Knight
Posts: 469
Joined: Sun Jun 08, 2008 3:39 am
Location: Cambridge, Massachusetts

Post by RiotGearEpsilon »

If doing a brain backup and simulation is possible, then people are going to want to play as the simulation, and I don't think Frank wants to support that.
User avatar
tzor
Prince
Posts: 4266
Joined: Fri Mar 07, 2008 7:54 pm

Post by tzor »

B u u u u u u t why would I wa wa wa wa wa wa nt to d d do that?

Since you are going through a simulation, doing a full-force simulation with translated inputs and outputs is going to be massively slow. Ideally you are going to need the meanest and nastiest hardware available, probably out of range for most characters and, even in this era, probably not damn portable.

But it could be perfect for interrogation purposes; copy and grill. Here sensory deprivation helps in the interrogation of the simulation. You will probably degrade the simulation quickly, but you can better get the answers you want without building up resistance in the target.
Grek
Prince
Posts: 3114
Joined: Sun Jan 11, 2009 10:37 pm

Post by Grek »

Brain backups are bad only insofar as they allow a character to survive death. I don't particularly care if the technological hurdle is at the uploading or downloading stage, as long as people can't get extra lives by making a copy of their brain.
Chamomile wrote:Grek is a national treasure.
name_here
Prince
Posts: 3346
Joined: Fri Mar 07, 2008 7:55 pm

Post by name_here »

I think we want to straight-up say, "No human-level brain backups of any sort, simulated or otherwise" so player characters can shoot a CEO in the face and have it MEAN something.

Any level of brain backing up that is insufficient for someone to keep being in charge of something after total destruction of their physical brain is fine.
DSMatticus wrote:It's not just that everything you say is stupid, but that they are Gordian knots of stupid that leave me completely bewildered as to where to even begin. After hearing you speak Alexander the Great would stab you and triumphantly declare the puzzle solved.
Endovior
Knight-Baron
Posts: 674
Joined: Fri Mar 07, 2008 7:54 pm

Post by Endovior »

It seems as though the problem is that, when uploading, we don't have the process refined enough to create more than a good approximation of the individual in question... a lot of 'you' executes at a deeper level than can be easily scanned; so though the process is ongoing and technically not 'impossible', it's not a technology that's available quite yet. If you try to execute that approximate upload in simulation, you run into the additional problem that your approximation runs more slowly than a real meat brain (not to mention a properly hardware-based AI), since it's effectively a complex bit of software at that point... and attempts to build hardware based on a scan of a given brainstate are maddeningly difficult and imprecise. Finally, attempts at downloading aren't especially practical, given that you're still only working with an approximation of a person... so while you might manage to tweak together a sort of a template of a person to make your Cauldron Born less crazy, you're still nowhere near the level at which you might actually resurrect a real person in any meaningful sense of the term.
eeuuugh
1st Level
Posts: 34
Joined: Sat Nov 29, 2008 8:47 pm

Post by eeuuugh »

Username17
Serious Badass
Posts: 29894
Joined: Fri Mar 07, 2008 7:54 pm

Post by Username17 »

Machines That Think
For our rise against the years...

Few things spark the imagination and fear like machine intelligence. Even in 2075, where it has been a reality for decades, people are constantly predicting the imminent Kurzweilian destruction of civilization due to computers making themselves vastly more “intelligent” than humans. The reality is less exciting than the fiction of machines copying themselves thousands of times and creating a networked super intelligence that spans the world and controls all the missiles. Sapience is an emergent trait, and it can be submerged again by adding material to the brain. A machine intelligence networked to the entire Network isn't a powerful mind that dominates all information transfer – it's a mind diluted until it essentially no longer exists. Sapience is too complex to be emulated in software, and there is no way to copy the self. When a computer brain is created it is a blank slate with relatively little capability – like a newly formed human brain, it mostly has the capacity to develop capabilities. And by the time it has attained personality and sapience, it has grown beyond the convolution threshold that 2075 science can duplicate.

Computer brains are somewhat larger and substantially heavier than their human counterparts. The human brain is a kilogram and a half, with the associated necessary life support organs weighing in at approximately 7 kilograms. A human-level machine brain is a full 12 kilograms and requires access to about 8 kilograms of necessary “life support” machinery. While machine intelligences are often set up to interface well with computer systems, 2075 science appears to be actually farther from being able to replicate a synthetic person than a biological one. There is no apparent theoretical reason why the brain of a human-level machine intelligence couldn't be replicated in every detail down to the atomic level – it is a purely physical object after all – but from a practical perspective there just isn't any process that could deterministically place quadrillions of circuits, or even uniquely identify that many circuits simultaneously, so the theoretical possibility doesn't matter in practice.

The Growth of Computer Minds
Now I know my ABCs, next time I predict associated singing from other available persons.

Strong Artificial Intelligence, called “SAI”, is the mind of a computer brain that can act on the level of a human being. These are relatively rare, more than a little expensive, and most importantly of all: quite time consuming to produce.

Machine intelligences are given letter ratings that track the expected growth of an artificial brain. A machine intelligence grows from A to B, and thence to C, D, and E in that order. A SAI is rated “E” by definition. The more potential an artificial brain has, the longer it takes to go through these stages – kind of like how humans have longer childhoods than dogs and rats grow up faster than puppies. If a computer is set up to be able to advance to a later stage of development and hasn't yet done so, it is designated with a “+”, with the anticipated maximum emergent intelligence level in parentheses. So an immature android might be labeled “D+(E)”, or anything down to A+(E), depending on how immature the machine actually is.
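
Purely as an illustration of the notation (a toy sketch, not a rule; the names here are made up), the rating can be treated as a current grade plus an optional emergent ceiling:

    # Sketch of the A-E rating notation described above; names are illustrative only.
    GRADES = ["A", "B", "C", "D", "E"]

    def designation(current, ceiling=None):
        """Return a rating string such as "E", "D+(E)", or "A+(C)"."""
        assert current in GRADES
        if ceiling is None or ceiling == current:
            return current
        assert GRADES.index(ceiling) > GRADES.index(current)
        return f"{current}+({ceiling})"

    print(designation("E"))        # a mature SAI: E
    print(designation("D", "E"))   # an immature android: D+(E)
    print(designation("A", "C"))   # a prototype Anchor: A+(C)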

A – The Asimov
You can't do much with only three rules in your decision tree.

The simplest of computer systems are capable of following heuristics that are programmed into them. Some of these decision trees are quite complex, and even without any sapience at all it is possible for the computer to appear intelligent and to act intelligently if its list of heuristics is sufficiently exhaustive for the situation it is actually in. Even in 2075, the vast majority of computer systems are A type. Your toaster, your comm, and virtually every tool you encounter is an A intelligence: something capable of receiving and following instructions.

Indeed, most uses for computers don't need or even want anything more “advanced” than an A. Computers that run on an Asimov level are predictable, they do what they are told and not other things, and for most purposes this is ideal. An A machine doesn't need to spend any time growing or learning and can have its entire instruction set copied into its data reserves the instant it is made. These computers can also be really small, with many Asimovs being the size of a human finger or less.

Asimovs can do pretty much anything that a computer could do at the turn of the century, although they are of course much more technically sophisticated. Given sufficiently complex and complete instruction sets, an Asimov can even fool a human into believing that it is operating on a higher level. There is really little observable difference between deciding to say or do something and being programmed to say or do the same exact thing under identical circumstances. There are, however, important limitations to heuristic-driven computation that more advanced machine intelligences (and people) can transcend (a toy sketch follows the list below):
  • An A-level machine intelligence cannot transcend its own heuristics. Whatever it is programmed to do, it will do. If its programming is insufficient or maladaptive for the task at hand, it will still be followed.
  • An A-level machine intelligence cannot become educated or make educated guesses. That which cannot be calculated cannot be known or predicted. Asimovs have no intuition.
  • An A-level machine intelligence cannot doubt its own inputs. It can have a very stringent set of security procedures for what inputs it will accept, but if inputs conform to them, they will be accepted and acted upon.
  • An A-level machine intelligence has no sense of self and no personality. Knowledge of its programming will allow an outsider to predict its actions with total accuracy.
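Here's the promised sketch of Asimov-level behavior: a fixed heuristic table that gets followed even when it is maladaptive, and inputs that are accepted so long as they pass the programmed checks. Everything in it (the toaster events, the function names) is made up for the example.

    HEURISTICS = {
        "bread_inserted": "lower_heating_element",
        "timer_expired": "pop_toast",
        "smoke_detected": "pop_toast",
    }

    def asimov_step(event, authenticated):
        # No doubt about inputs: any event that passes the programmed check is acted on.
        if not authenticated:
            return "ignore_input"
        # No transcending the heuristics: an event outside the table produces no action.
        return HEURISTICS.get(event, "no_action")

    print(asimov_step("bread_inserted", True))    # lower_heating_element
    print(asimov_step("kitchen_on_fire", True))   # no_action - not in the table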
B – The Bradbury
If you don't like what you are doing, don't do it.

B rated machine intelligences are capable of adapting their own heuristics. Most importantly, they are capable of creating models of other heuristics based on observed behaviors by other actors. The Bradbury is often referred to as the “empathy shell”, as it enables the machine intelligence to act as if it sees things from the point of view of others. The Bradbury also allows a machine intelligence to have goals, and to mix and match heuristic behavior to better achieve those goals. When a machine intelligence has grown into a B-level, it has achieved the ability to learn as opposed to merely the ability to add data to its data-stores.

Computers which are intended to interpret or predict the actions of others need to be B-level if they are to not completely suck. Voice recognition systems, customer relations, and anything else that interacts directly with the public tends to be B-level or better. Most importantly, an Anchor is B-level. At least, it is once it has adapted itself to its user. A factory produced Anchor is actually A+(B) when you pull it out of the package, and after using it for a while it develops the ability to anticipate what you want it to do and becomes B-level. The amount of time it takes to break in a new Anchor varies, but if you use it a lot you can usually get it done in a week. For this reason, Anchors are sometimes called “Bradburries” once they've achieved the ability to identify and interact with other heuristics.

C – The Clarke
Any sufficiently difficult calculation is indistinguishable from guessing.

C-level computing gets into the shadowy realm of intuition: the ability to know and predict things rather than simply calculate them. The Clarke is distinguished from A- and B-level intelligences because it can imagine things and use imagined things as inputs for its heuristics when it doesn't have completely relevant data. C-level machine intelligence has the ability to take risks and to be wrong – and as such is not desirable for tasks requiring the kind of consistency that computers are normally good for. But only at the point of C-level intelligence is a machine able to propose solutions for problems that are not part of its heuristics.

Clarkes see the most use in management, where the ability to see a big picture and fill in unknowns with predictive guessing is necessary. Advanced Anchors that attempt to understand what they are being asked to do and why are under development, and limited production runs of A+(C) Anchors have already been produced.

D – The Dick
If you grasp – even for a moment – the sheer extent of the conspiracy arrayed against you, that's a Phildickian experience.

When a machine intelligence crosses the threshold of being able to doubt its own inputs and heuristics, it is a Dick. This ability to doubt inputs is considered a bug in a lot of Clarkes that have an assigned task they are supposed to be doing, since D-level machine intelligences sometimes decide that their programming and data stores are fake and go off-message in any of a number of ways. But it is an absolute necessity in order to have a system that is actually capable of defending itself in any real way against hackers.

Secure installations and important servers are often grown into D-level intelligence machines or are networked with machines that have already grown into Dicks so that the controlled portion of The Network acts properly suspicious. This is similar to putting an actual guard on duty. War machines use D-level intelligences whenever possible, as do the cores of the banking system. Pretty much anywhere that the possibility of the device being subverted by malicious hackers feeding it false inputs is more terrifying than the possibility of the device simply going on strike is a place where a D-level intelligence would be desired. Development of a Dick takes years.

E – The Ellison
I purposely mishear things.

E rated machine intelligences are inherently unpredictable. They have the ability to doubt not only their inputs but even their own programing. Once radical doubt has permeated every part of the machine brain, the device generally comes out the other side with a unique viewpoint and some philosophical acceptance of the fact that all of its senses are provided to it by a DD and it lives in ZR all the time. E-level intelligences have personalities, personal goals, and psychological problems. They are truly sapient and are considered SAIs.

There is some evidence that building machines to achieve E-level intelligence is not strictly necessary, as it appears unexpectedly in D-rated machines from time to time. There are people who believe that Dicks developing into Ellisons is an inexorable and inevitable process that designing the processing chips can only speed up or slow down. All androids are E-rated.

The Limits of Expansion
I know kung fu.
For the last time, no you don't.

Capabilities that can be handled by an Asimov can be added to an android by plugging in a chip that handles that function. An android can plug in a chip to play Go or drive a car, provided that it has a chip encoded with those capabilities. This is, however, stressful, and the dilution of the android's mind is disruptive to its psyche. Over long periods of time, these capabilities can be absorbed and integrated into the brain and persona. In game terms, running additional capabilities networked into the android's brain racks up Temporary Stress for as long as the network persists, while integrating capabilities into the brain itself produces Permanent Stress. If the android overstresses in this manner, the result is a dissipation of self, which is generally regarded as unpleasant. However, the Stress from integrated capabilities can be bought off simply by "learning" the skills, at which point the module is considered part of the android's brain and is no longer considered an implant.
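
A minimal sketch of how that stress bookkeeping might be tracked at the table, assuming placeholder numbers and names (the actual Stress costs and overstress threshold are not specified here):

    class AndroidMind:
        def __init__(self, overstress_threshold=10):
            self.temporary_stress = 0   # from capabilities run over a network link
            self.permanent_stress = 0   # from capabilities wired into the brain
            self.learned = set()
            self.threshold = overstress_threshold

        def network_chip(self, chip, stress=1):
            # Running a chip networked into the brain racks up Temporary Stress
            # for as long as the link persists.
            self.temporary_stress += stress

        def unplug_chip(self, chip, stress=1):
            # Temporary Stress goes away when the network link ends.
            self.temporary_stress = max(0, self.temporary_stress - stress)

        def integrate_chip(self, chip, stress=2):
            # Integrating a chip into the brain itself produces Permanent Stress.
            self.permanent_stress += stress

        def learn_skill(self, chip, stress=2):
            # "Learning" the skill buys off the Permanent Stress; the module stops
            # counting as an implant.
            self.permanent_stress = max(0, self.permanent_stress - stress)
            self.learned.add(chip)

        def overstressed(self):
            # Too much total Stress means dissipation of self.
            return self.temporary_stress + self.permanent_stress > self.threshold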
fectin
Prince
Posts: 3760
Joined: Mon Feb 01, 2010 1:54 am

Post by fectin »

Nice. Three things:

Does that mean we're keeping the machines down with our message boarding?

Neural nets are (basically, as far as I understand) just a collection of heuristics plus data. Are they A, B, or C? Why?

If "android" means humanoid robot with E-level intelligence, what is the word for Real Dolls? What about when they get C or D-level intelligence?
Grek
Prince
Posts: 3114
Joined: Sun Jan 11, 2009 10:37 pm

Post by Grek »

There is no reason (as far as I know, anyways) that you'd want your real doll to be able to doubt what you say is true. So D+ is out. Likewise, A is too little, because it would be highly unsatisfying to get an "ERROR: UNEXPECTED INPUT" message in the middle of sex. So you're probably looking at B+(C) for a real doll.

I have no idea what you mean by the other questions, so I'm not even going to try to guess at them.
Chamomile wrote:Grek is a national treasure.
fectin
Prince
Posts: 3760
Joined: Mon Feb 01, 2010 1:54 am

Post by fectin »

FrankTrollman wrote:The reality is less exciting than the fiction of machines copying themselves thousands of times and creating a networked super intelligence that spans the world and controls all the missiles. Sapience is an emergent trait, and it can be again submerged by adding material to the brain. A machine intelligence networked to the entire Network isn't a powerful mind that dominates all information transfer – it's a mind diluted until it essentially no longer exists.
That sounds like creating massive amounts of content keeps machines from growing sentience. Hence, message boards keep machines down. More tongue-in-cheek than serious.

The categories are cool, but they aren't really distinct at the low end. A computer today would nominally be an A-level intelligence, because it only operates on basic heuristics. However, basic heuristics can get you some pretty impressive learning effects if you set it up right. D and E make sense to me, but the differences between A, B, and C are about as meaningful as the differences in Mage's Sphere levels. (A and B are much more blurred together than C, but C still seems to describe basic Built-In-Testing).

"Android" in normal English means "a robot that looks human". In Heartbreaker, it apparently means "a robot that looks human and has E-level intelligence". That's cool, but what are the Heartbreaker terms for robots which look human and have A, B, C, D, or no intelligence?
Grek
Prince
Posts: 3114
Joined: Sun Jan 11, 2009 10:37 pm

Post by Grek »

A robot. Or a waldo if it's used in remote construction. Or possibly the name of the model.
Last edited by Grek on Thu Aug 18, 2011 3:08 am, edited 1 time in total.
Chamomile wrote:Grek is a national treasure.
RiotGearEpsilon
Knight
Posts: 469
Joined: Sun Jun 08, 2008 3:39 am
Location: Cambridge, Massachusetts

Post by RiotGearEpsilon »

fectin wrote:"Android" in normal English means "a robot that looks human". In Heartbreaker, it apparently means "a robot that looks human and has E-level intelligence". That's cool, but what are the Heartbreaker terms for robots which look human and have A, B, C, D, or no intelligence?
"Not a player character."
User avatar
Vebyast
Knight-Baron
Posts: 801
Joined: Tue Mar 23, 2010 5:44 am

Post by Vebyast »

fectin wrote:The categories are cool, but they aren't really distinct at the low end. A computer today would nominally be an A-level intelligence, because it only operates on basic heuristics. However, basic heuristics can get you some pretty impressive learning effects if you set it up right. D and E make sense to me, but the differences between A, B, and C are about as meaningful as the differences in Mage's Sphere levels. (A and B are much more blurred together than C, but C still seems to describe basic Built-In-Testing).
A-level intelligences don't "understand" game theory or information theory, though their programmers can manually tailor their programming to take actions that are effective in the case of probabilistic or information-theoretical hiccups.

B-level intelligences don't understand information theory, but they do understand game theory. They know that there are other intelligences out there reacting to their own actions, and they can model and react to those intelligences to maximize their game-theoretic payoff (the "goals" that Frank mentioned). The programmer can tailor that reward function to drive the Bradbury to take actions that would be more effective in the presence of actors that the Bradbury hasn't observed, but the Bradbury still doesn't fundamentally understand information theory.

C-level intelligences understand both game theory and information theory. They know that there are both other intelligences out there and that, even if they know exactly how those intelligences behave, they can't predict everything. A Clarke can manufacture completely hypothetical scenarios ("What if someone leaks pictures of the new iPhone 71?") and can modify their plans to take into account the reactions of other actors in the system to those hypothetical scenarios ("If people see another iteration of the same tired tech, will they start selling AAPL like mad, and do I need to account for this possibility?").


A+(B) and A+(C) intelligences behave like A intelligences because they haven't gathered enough data to infer the existence of other actors. Their data still tells them that they are the only intelligences in existence, so they continue to react exactly as their original programming tells them to. As soon as they gather enough data to realize that there are other actors present, they become B or B+(C).

A B+(C) behaves like a B because it hasn't yet realized that its information is incomplete. It hasn't encountered enough outside events for it to be convinced that its perceptions are incomplete. Once it gathers enough data for it to realize that things can happen that it hasn't considered, it starts "intentionally" considering things that it couldn't rationally predict.
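
To make that concrete, here's a toy sketch of the three levels as decision procedures. Everything in it is invented for illustration; it's not a rule or an algorithm from the game, just one way to picture the distinction.

    def asimov_decide(state, heuristics):
        # A: fixed lookup; no model of other actors, no hypotheticals.
        return heuristics[state]

    def bradbury_decide(state, actions, predict_rival, payoff):
        # B: models observed rivals and maximizes payoff against their predicted move.
        rival_move = predict_rival(state)
        return max(actions, key=lambda act: payoff(act, rival_move))

    def clarke_decide(state, actions, predict_rival, payoff, imagine_states):
        # C: also weighs hypothetical states it has never actually observed.
        scenarios = [state] + list(imagine_states(state))
        return max(actions,
                   key=lambda act: sum(payoff(act, predict_rival(s)) for s in scenarios))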
Last edited by Vebyast on Fri Aug 19, 2011 2:49 am, edited 3 times in total.
DSMatticus wrote:There are two things you can learn from the Gaming Den:
1) Good design practices.
2) How to be a zookeeper for hyper-intelligent shit-flinging apes.
User avatar
Lokathor
Duke
Posts: 2185
Joined: Sun Nov 01, 2009 2:10 am
Location: ID
Contact:

Post by Lokathor »

An Asimov won't (and possibly can't) mutate its programming at all, even when that programming fails; a Bradbury will modify itself to better reach its goals, because it actually has goals that it can think about.

That's a very clear distinction to me.
  • The Ends Of The Matrix: Github and Rendered
  • After Sundown: Github and Rendered
Draco_Argentum
Duke
Posts: 2434
Joined: Fri Mar 07, 2008 7:54 pm

Post by Draco_Argentum »

Bush reference, excellent.

Overall it's a good section, except for the Dick part. Aside from the inevitable snickers, dick also means jerk, which is a second unfortunate connotation for the term. Seems like a bad idea to me.
Username17
Serious Badass
Posts: 29894
Joined: Fri Mar 07, 2008 7:54 pm

Post by Username17 »

You basically can't do cyberpunk without referencing Philip K. Dick, because the entire genre is an homage to his writings. I could easily see swapping out the E or B names for other futurists, but the Dick stays. The fact that security probes are "an uncompromising Dick" is something of a plus, but entirely coincidental.

He wrote the stories behind Blade Runner, as well as Minority Report, A Scanner Darkly, Total Recall, and The Adjustment Bureau. You really can't do cyberpunk involving virtualization of senses or memory without referencing him at least indirectly. And the fact that the D happens to be the one where you get to doubt sense data is way too perfect. Seriously: that's the best name on the list. By a lot.

-Username17
RiotGearEpsilon
Knight
Posts: 469
Joined: Sun Jun 08, 2008 3:39 am
Location: Cambridge, Massachusetts

Post by RiotGearEpsilon »

It might be worth it to use the full name of the authors just to avoid writing 'the dick'. 'The Isaac Asimov', 'the Ray Bradbury', the 'Isaac Clarke', the 'Philip K. Dick'.
violence in the media
Duke
Posts: 1725
Joined: Tue Jan 06, 2009 7:18 pm

Post by violence in the media »

RiotGearEpsilon wrote:It might be worth it to use the full name of the authors just to avoid writing 'the dick'. 'The Isaac Asimov', 'the Ray Bradbury', the 'Isaac Clarke', the 'Philip K. Dick'.
Don't you mean Arthur C. Clarke?

That aside, I like the designations as they stand. Using the full names would be too stuffy and formal.
User avatar
Vebyast
Knight-Baron
Posts: 801
Joined: Tue Mar 23, 2010 5:44 am

Post by Vebyast »

I like the short names without modification. Frank makes a good point about Philip K. Dick's relevance, and the fact that it falls onto the very most dickish WAI category is a bonus.
DSMatticus wrote:There are two things you can learn from the Gaming Den:
1) Good design practices.
2) How to be a zookeeper for hyper-intelligent shit-flinging apes.
A Man In Black
Duke
Posts: 1040
Joined: Wed Dec 09, 2009 8:33 am

Post by A Man In Black »

Any possibility of C being Capek instead of Clarke? It seems a shame to miss an author who influenced or inspired all of the others directly or indirectly.

Also, there's a story hook in the sort of AI that is most often used in war machines being the same sort of AI that most often tends to show unexpected development to the next level, huh.
Username17
Serious Badass
Posts: 29894
Joined: Fri Mar 07, 2008 7:54 pm

Post by Username17 »

A Man In Black wrote:Any possibility of C being Capek instead of Clarke? It seems a shame to miss an author who influenced or inspired all of the others directly or indirectly.
Probably not. Čapek's "Č" isn't in the traditional English alphabet. The "Č" comes between C and D, so there could be some sort of intermediary state between information theory and radical doubt, but that seems sketchy.
A Man In Black wrote:Also, there's a story hook in the sort of AI that is most often used in war machines being the same sort of AI that most often tends to show unexpected development to the next level, huh.
Yeah...

-Username17
Post Reply