Zak S wrote:
A) The rewarder would generally want Superman's goodwill, trust, and assessment of the rewarder as a good person. Useless gifts would be tokens of this. My neighbor doesn't think I need cheese, but the generous and spontaneous sacrifice of cheese ("Here, have some cheese!") when I do something they like represents a degree of attention and goodwill. This person is communicating an amount of willingness-to-cooperate (a useful resource), particularly if the gift is a sacrifice for the rewarder. (This was addressed in my previous comments; search "superman", "zak s", "infinite", "chuck".)
You could have just copy-pasted the bit into a spoiler block, so that it's available for everyone's quick reference. Like this.
Zak S wrote: Superman altruistically saves someone. When does he not get a bonus to their reaction?
Well, when the following conditions are all secured (not just one, ALL of them):
-Savee 100% certain s/he will never need help from Superman again (i.e. no effect on resource).
-Savee 100% certain no-one who provides important resources to savee will ever need help from Superman (if he saves the dentist down the street, your supply of dental care is uninterrupted).
(This one is huge, by the way--in the DC Universe, the entire universe is frequently threatened. This is why often even villains see the point in having Superman around.)
-Savee 100% certain s/he will never be in a position where Superman's positive judgment of him/her would be helpful in securing or maintaining a resource. (For example: Superman saves Chuck. Chuck is ungrateful. If Chuck falls off a building again, Superman will still save him. However, if anybody asks altruistic Superman "is Chuck a good guy?" for any reason of any importance, Superman's negative evaluation of Chuck could affect Chuck's access to resources. Also, now Supes may be more suspicious of Chuck in any future Chuck-related resource-gathering enterprises.)
(i.e. savee regards "Superman's trust and/or goodwill" as a useless resource)
-Savee 100% certain nobody who could ever even indirectly control (pro or con) his access to resources will ever discover his/her ingratitude.
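To make the "ALL, not just one" structure of that list explicit, here is a minimal sketch in Python (the function and parameter names are mine, invented for illustration): gratitude is withheld only when every one of the four certainties holds, so a single doubt is enough to restore the reaction bonus.

```python
# Hypothetical sketch of the argument above: the savee withholds
# gratitude only if ALL four certainties hold simultaneously.
# Any single doubt means gratitude (and the reaction bonus) remains.

def withholds_gratitude(never_needs_help: bool,
                        providers_never_need_help: bool,
                        judgment_never_matters: bool,
                        ingratitude_never_discovered: bool) -> bool:
    """True only when every condition is 100% certain."""
    return all([never_needs_help,
                providers_never_need_help,
                judgment_never_matters,
                ingratitude_never_discovered])

# Chuck doubts even one condition -> he stays grateful.
print(withholds_gratitude(True, True, False, True))  # False
```

The point of the conjunction is exactly what makes the criteria so demanding: the savee needs certainty on every axis at once.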
Anyway, let me see if I can formalize it into a nice syllogism.
-If Chuck is not grateful to Superman, then Chuck is absolutely certain of all the conditions above (i.e., he is ungrateful only if every criterion is met).
-Chuck will almost never be absolutely certain about those things.
-Therefore, Chuck will almost never be ungrateful to Superman.
Hopefully that matches your intent.
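In standard notation the syllogism is just modus tollens, under the idealization that "almost never" is treated as "never" (a sketch; $G$ stands for "Chuck is grateful" and $C$ for "Chuck is absolutely certain of all four conditions"):

```latex
\neg G \rightarrow C \qquad \text{(ungrateful only if certain)}\\
\neg C \qquad \text{(Chuck is never certain)}\\
\therefore\ G \qquad \text{(Chuck is grateful)}
```

The "almost" in the premises is what keeps this from being a strict deduction, which is why the certainty objection below matters.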
Problem: this argument relies rather heavily on a notion of absolute certainty, which is simply not applicable to synthetic propositions. Even if it were, the proposal that "currencies whose continued supply might be threatened by refusing a given request" includes all the various things listed in your above criteria is untenable.
The sheer complexity involved in social networks is mind-shattering, even when they are represented in a simple, abstract model like directed, weighted graphs. They are so complex, in fact, that one of the classic NP-hard problems in computer science is the Traveling Salesman Problem. The project of examining all the relationships, resources, and requests in both present and future to see which are relevant, and then which are determining, is orders of magnitude harder.
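To put a number on that growth, here is a toy Python calculation (purely illustrative) counting how many candidate tours an exhaustive TSP solver must examine on a complete graph of n nodes, fixing the start city and ignoring tour direction:

```python
from math import factorial

def tour_count(n: int) -> int:
    """Distinct tours in a complete graph on n nodes: (n - 1)! / 2.
    (Start city fixed; a tour and its reverse count as one.)"""
    return factorial(n - 1) // 2

for n in (5, 10, 15, 20):
    print(n, tour_count(n))
# At n = 20 this is already ~6 * 10^16 tours -- and the social-network
# analysis described above is harder still.
```

Even a modest social circle of twenty people blows past what brute force can handle, which is the point: nobody is actually computing this at the table.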
Since the problem can't really be solved in a reasonable time, the DM must use some sort of heuristic. But if that heuristic is a black box, it is indistinguishable from the DM simply choosing a number based on what feels right. Now, there's nothing wrong with DMs doing that. What it is not is the output of a workable social currency system.
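For contrast, here is what a transparent (non-black-box) heuristic could look like: a hypothetical one-hop rule I'm inventing for illustration, not any published system. It scores only the savee's direct relationships and maps the total to a bonus, so every number can be audited.

```python
# Hypothetical one-hop heuristic: instead of searching the whole social
# graph, sum the weights of the savee's direct ties and convert the
# total into a reaction bonus. Crude, but auditable -- unlike a number
# pulled from a gut feeling.

def reaction_bonus(direct_ties: dict) -> int:
    """direct_ties maps neighbor name -> relationship weight in [0, 1]."""
    total = sum(direct_ties.values())
    if total >= 3.0:
        return 2   # well-connected savee: large bonus
    if total >= 1.0:
        return 1   # some ties: small bonus
    return 0       # isolated savee: no bonus

print(reaction_bonus({"dentist": 0.8, "boss": 0.9}))  # 1
```

The design choice is the point: truncating the lookahead to one hop is what makes the heuristic both tractable and inspectable.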
B) If Superman is asking for anything (particularly in a game context), no matter how small, then the rewarder would be given to understand that the favor is (Superman being altruistic) important in some way to Superman's survival or overall project, even if the rewarder is unsure why.
I think you're doing some magic with "altruistic"; i.e., it doesn't mean what it would need to mean in order to get (B) off the ground. Being altruistic does not require that the only requests one makes be motivated by necessity.
(Actually, altruism is a rather sticky subject in general, which is why people are sticking to a rather simple, game theoretic notion.)