Practice dummy servitor

I was thinking of constructing a servitor that would have artificial ancestors protecting him, as well as an artificial guardian angel. This would simulate a normal human's magical defenses. I would place the bodies of the devatas in this dummy to give him chakras, and I would make a sigil for him. I would also place a regenerative factor in him, so that every 5 hours he would reanimate and heal himself along with his protections. Myself and others could then throw curses at said dummy to test the efficacy of death curses, among other things. Would someone like to be my devil's advocate for this idea? @Lady_Eva @KingOfHearts616


Sounds good on paper, like communism.

Only way to know is to try. Please place failsafes in it.


I will, man :slight_smile:. Any obvious weaknesses in the idea?


We’d also need a way to monitor it and its status constantly…


That would be where I and another come in. Any other ideas?


Like Communism :joy::joy::joy::joy:


There’s my usual comment that servitors rapidly start to care about their own lives and to have hopes, insofar as they wish to expand their function and experience more. The more complex they become, the more they resemble organic life, so there’s an ethical aspect: you’re going to be inviting people to kill and harm a thing that does not wish to be killed or harmed, and you’re allowing it to happen over and over again, as well.

Just imagine scientists creating a cloned baby that can be tortured and then comes back to life, so they can test new methods of torture, and the point at which they will finally cause death.

What you’re doing is essentially that, because if you build it without the ability to feel distress, and fear, over what’s happening, it won’t resemble a human.

I also think you’re coming at the issue of killing people bottom-up, from a low-magick “I am a thing bobbing haplessly in an ocean of other things” stance, rather than evoking omnipotence and acting as a god here, so I think the experiment is unsound in terms of increasing power as well.


The evolution of servitors varies from maker to maker. My servitors do not become sentient unless told to. Think of it like computer code that is designed to feel but has no true feeling: the extent of its feeling would be reporting where the energy is and its intensity. There are different methods of creating servitors, and limits to their evolution. Making sure the servitor’s configuration is static and doesn’t advance in the ways mentioned above will definitely be taken into consideration.
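To make the code analogy above concrete, here is a minimal, purely illustrative sketch (all names are hypothetical, not anything from the original post): a "static configuration" that only maps inputs to reports, holds no internal state, and has no goals or drives it could act on.

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: the configuration is static and cannot change after creation
class StaticServitor:
    """Hypothetical 'report-only' servitor: pure input -> output, no stored state."""
    name: str

    def report(self, location: str, intensity: int) -> str:
        # No preferences, no memory, no self-model: it only describes what it was given.
        return f"{self.name}: energy at {location}, intensity {intensity}"


dummy = StaticServitor("practice-dummy")
print(dummy.report("heart chakra", 7))  # prints "practice-dummy: energy at heart chakra, intensity 7"
```

The `frozen=True` flag is the point of the sketch: the object can answer queries but has no mechanism by which it could accumulate desires or rewrite its own behaviour.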


Have you seen Black Mirror? The episode where the code has become sentient and has consciousness? The servitor will be more like this.


Being programmed to sulk if ignored is literally no different from the hardwired evolutionary need to avoid being shunned that exists in humans, though.

A large amount of what we think, feel, and do is hardwired (sex, success, popularity; survival, enjoyment, fulfilment of the desire for knowledge), and if you go far enough “back of, and behind” the stuff that makes you human, you come to emotionless Source, which has no preferences or feelings as such because it has no innate drives.


Would you mind expanding on the first sentence? I’m not sure I totally understand. In my opinion the difference exists in self-realization. Without the robot realizing it is here, that it exists, it is a simple form of 1s and 0s. Perhaps the point you’re making is that even if an intelligence doesn’t have level-3 consciousness, it is suffering nonetheless. Suffering in any form is bad even when it’s simple if-then code? I’m not straw-manning your argument, I’m just seeing if I’m on the right page.


The thing is, what in us realises that we are here and that we exist?

I’ve done spiritual healing for animals, which the consensus used to say lack this, but that’s not the case spiritually, and anyone who loves their pets is likely to have seen enough examples of the animal seeming uncertain or showing compassion (working benchmarks of self-awareness). More advanced research is showing that even chickens (which are vicious little fucks, with tiny brains) can understand empathy, and that dogs are capable of as much self-awareness as human children. Pigs, too.

So that was one whole group of “oh, they only work on input-output stimulus and are not self-aware” blown out of the water, as indeed befits the anti-science concepts that originated with the Abrahamic faiths (the only belief systems in which animals are totally unlike humans).

I’m not here to say what’s good or bad at you, just stating that entities which exist usually strive to become self-directing, and develop aversions and desires.

Plants have been shown to react well to loving words, and to wither when repeatedly abused, just with words and emotion…

Of course people making AI will insist the being cannot know it’s here, but no-one has yet isolated the mechanism in the human brain which produces self-awareness, so to say “oh, it can’t happen in any advanced system that in many ways resembles a human brain” is either dumb or intentionally deceptive, since only a few years ago scientists were talking about creating emotions as the key to true self-aware AI.

But my key point is that people routinely have problems with servitors becoming self-aware and deciding to try and get themselves a better life, and anything that mimics a human as closely as this is likely to have a head start on that.

This danger increases as you give it ancestors, who will presumably be motivated to protect a being outside themselves, which gets heavily into making a thing self-aware, since they must by default be aware of a not-me that can experience, and which has a desired state and a state it wishes to avoid.


Thank you for clarifying, that’s quite interesting! I understand what you’re saying and will take the necessary precautions :slight_smile:. Perhaps some divination is necessary before going to stage 3.