On this page I’m going to attempt what apparently no one else has managed: to give decent definitions of ‘glitch’ and ‘anomaly’ that are specifically relevant to anyone wanting to THINK about glitch and anomaly possibilities within an entirely software defined world of self aware, free thinking people.
What is a ‘Glitch’? What would be a good Definition of Glitch for Simulation Argument Investigations?
A glitch is defined as being a TRANSIENT ‘error’; the term implies something serious AND seriously noticeable, perhaps even catastrophic.
Examples of ‘glitch’ possibilities include:
- The power supply to your simulation hardware failing.
- The processing power (bit size, clock rate, memory and data buffering capacity, parallel processing capacity) being insufficient under some ‘operational’ conditions, such that the processing engine fails to calculate everything in time to ACCURATELY and seamlessly render the ‘next’ simulation ‘frame’ to some, many or all of the simulated residents.
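The second possibility above can be sketched as a per-frame compute budget: a sketch only, with made-up numbers, assuming the engine must finish each frame within a fixed deadline and that cost scales with the amount of work queued for that frame.

```python
FRAME_BUDGET = 1.0 / 60.0  # seconds available to compute one 'frame' (assumed figure)

def frame_outcome(work_units: int, cost_per_unit: float) -> str:
    """Classify a frame by comparing its compute cost to the deadline budget."""
    cost = work_units * cost_per_unit
    if cost <= FRAME_BUDGET * 0.8:
        return "ok"        # comfortably within budget
    if cost <= FRAME_BUDGET:
        return "marginal"  # deadline met, but with no headroom left
    return "glitch"        # deadline missed: the frame is late or dropped

# Under light load the engine keeps up; a spike in work overruns the budget.
print(frame_outcome(1_000, 1e-8))       # ok
print(frame_outcome(2_000_000, 1e-8))   # glitch
```

The point of the toy model is that a glitch isn’t caused by ‘average’ load but by worst-case spikes: any single frame that overruns its budget is potentially visible to residents.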
How would you Avoid Having Obvious ‘Glitch’ Glitches or Anomalies Presented to a Simulated Population?
It is to be expected that the designers of a Matrix or a simulation (if they are competent) would have engineered their project to be glitch free. For the two examples I describe above, this would equate to the designers:
- Implementing triple-redundant backup generators to take care of power outage possibilities and . . .
- Implementing ‘simplifying approximations’ to keep processing engine overheads well within super safe ‘operational’ limits.
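One plausible form of ‘simplifying approximation’ would be a level-of-detail fallback: regions that no resident is currently observing get rendered cheaply whenever projected load approaches a safety threshold. The sketch below is purely illustrative; the 0.7 headroom figure and the function name are my assumptions, not anything established.

```python
def choose_detail(projected_load: float, observed: bool,
                  safe_limit: float = 0.7) -> str:
    """Pick a detail level that keeps projected load inside the safe limit.

    'safe_limit' is a made-up headroom fraction: stay well below 100% capacity.
    """
    if observed:
        return "full"         # never degrade what residents can directly notice
    if projected_load > safe_limit:
        return "approximate"  # unobserved regions get cheap approximations
    return "full"
```

The design choice here is that degradation is steered away from anything observed, so the approximation buys headroom without ever becoming a noticeable glitch.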
The next few pages will go into glitch and anomaly possibilities in more detail. I’ll be giving some realistic examples of different catastrophic hardware overload possibilities that could lead to an anomaly or to a ‘glitch in the matrix’ event.
What ‘Boundary’ Limits would a Simulation Designer Implement to Help Avoid Having Obvious ‘Glitch’ Glitches Presented to a Simulated Population?
I’ll also be explaining how a hand built simulation will have defined limits with respect to the basic functions, parameters and the scale of the simulated environment, because this is necessary to contain the processing required so that the processing overheads always stay within safe limits (otherwise you’ll start getting anomalies, all the way through to serious glitches).
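Such designed-in ‘boundary’ limits can be pictured as hard clamps on the reality’s fundamental parameters. The values below are loose analogies I’ve chosen for illustration (a speed-of-light-style velocity cap, a Planck-like smallest distance, an energy ceiling), not claims about our physics:

```python
# Hypothetical hard 'boundary' limits baked into a simulated reality.
LIMITS = {
    "max_speed": 3.0e8,     # absolute velocity cap (speed-of-light analogue)
    "min_length": 1.6e-35,  # smallest resolvable distance (Planck-like floor)
    "max_energy": 1.0e12,   # per-interaction energy ceiling (arbitrary figure)
}

def clamp(name: str, value: float) -> float:
    """Force any requested physical value back inside its designed boundary."""
    if name == "max_speed":
        return min(value, LIMITS["max_speed"])
    if name == "min_length":
        return max(value, LIMITS["min_length"])
    if name == "max_energy":
        return min(value, LIMITS["max_energy"])
    return value
```

Whatever a resident (or a runaway script) attempts, the engine never has to compute anything beyond these caps, which is exactly what keeps worst-case processing overheads bounded.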
I’ll also explain how, within a population of free thinking individuals, you would expect at least ‘some’ to have very open ended scripts, such that they could end up trying to do ‘worrying’ things. To pre-empt this possibility, a competent simulation designer will likely alter some fundamental parameters of the simulated reality specifically to keep ‘possibilities’ contained.
After the above I’ll also explain the problem of making ANY of these alterations when you are simulating a self aware, free thinking population that is an accurate copy of your own population.