Recent posts by vikaTae on Kongregate

Topic: Serious Discussion / Actual Minds in Videogames

Originally posted by Tulrog:


their sentience and self-awareness is arguably an illusion

from the perspective of the NPCs inside the game-verse, they ARE real people

I have trouble mentally digesting those two at the same time.

As a human I have experience with being self-aware. Especially in those very rare moments where you suddenly turn off the auto pilot and are completely aware that you exist. The idea that this could be some sort of illusion just doesn’t make any sense to me.

Well, we know your own self-awareness is arguably an illusion. It would be no different for the AI.

I worded it as their sentience and self-awareness being arguably an illusion precisely because I thought that if I didn’t include that nod to our neurology, someone else would call me on it.

In the case of the AI, it would be a very similar process to our sense of self. ie, it’s provided wholesale from another source that is not our conscious mind, and decisions are enacted before the conscious mind has thought them through.

Perhaps more pertinently, for reasons of conserving computing resources, the AIs would likely all be one single mind, with individual personalities and walled-garden mentalities – a bit like multiple personality disorder. This would be done purely because it’s cheaper in terms of system resources to only initiate one instance of a massive database, and use aliases and queries to look up relevant information in it, rather than initiate 10 or 20 separate instances of that database. Behind the scenes, you can get away with all kinds of memory-saving tricks and optimisations if it’s all really one database.
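To make the idea concrete, here’s a minimal sketch of that single-store, many-personas arrangement. All class and method names here are hypothetical illustrations, not any real engine’s API; the point is only that each persona is a lightweight alias over one shared database, with the walled garden enforced at query time.

```python
class SharedMind:
    """A single knowledge store shared by every NPC instance."""
    def __init__(self):
        self._facts = []  # one copy of all knowledge, held exactly once

    def add_fact(self, owner, fact):
        self._facts.append((owner, fact))

    def query(self, owner, predicate):
        # Each persona only sees facts tagged with its own alias,
        # so the "walled garden" is enforced here, not by duplication.
        return [f for o, f in self._facts if o == owner and predicate(f)]


class Persona:
    """A cheap alias over the shared store; no duplicated data."""
    def __init__(self, mind, name):
        self.mind, self.name = mind, name

    def remember(self, fact):
        self.mind.add_fact(self.name, fact)

    def recall(self, predicate=lambda f: True):
        return self.mind.query(self.name, predicate)


mind = SharedMind()
alice, bob = Persona(mind, "alice"), Persona(mind, "bob")
alice.remember("the well is poisoned")
bob.remember("the baker owes me money")

alice.recall()  # ["the well is poisoned"] – bob's memory is invisible to her
```

From each persona’s point of view its memories are private, yet only one store ever exists – which is the cost saving being described.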

But to the AIs themselves, it would all feel like they had their own minds, the same way as to us, it really feels like we’re in full control of the choices ‘we’ make. It’s all illusion, but it feels real to the person in the driving seat.

Take an attack on villagers as an example. If you attack people in a town in Skyrim the civilians run away and the guards attack you. That’s a pretty realistic reaction. But that doesn’t mean they “feel” anything. This is what I would call an illusion. But if I understand you correctly your AI would actually feel something. Not only “feel” but actually feel. And this doesn’t fall into the category of simulation to me anymore. Otherwise everything is a simulation. Once again the problem are convincing measurement results.

You understand me correctly. However, it is still a simulation, in the terms that these minds, their entire world, their lifetime of memories and experiences, all exist inside the computer in your house. It is encapsulated within that machine. An entire reality with no connection to our own, save for when the player decides to interface through their player avatar.

That’s why I chose this particular model for interfacing with an AI. All the others have the AI existing in our world, interacting with us on a level playing field, with degrees of freedom in influencing the world we all share. This particular use of AI doesn’t have any of that. It is solely through the player that they experience anything from our world, and even that is experienced in their world, not ours.

It becomes solely up to the player how they are treated. Whether other AIs in our world have legal rights or not doesn’t matter here, as they have no means to even find out about such rights, unless the player tells them. They don’t even know they’re AIs. Since they have no recourse to a higher authority (other than the player) whatsoever, and nobody is going to find out what happened to them unless the player tells, it offers perhaps the highest ultimate amount of freedom in interacting with other sapients that’s ever going to be practically possible in our world. Looking at how those players would react in such a scenario is interesting to me – particularly since I accept it as a given that this tech will come.

I’m certain you don’t need a biological body at all.

That’s the point which makes or breaks the whole idea. I tend towards your positive outlook but am in no way as sure as you are. Your example with the artificial limb is hopefully a pointer in the right direction.

But we both know the critical part is the brain. If you can’t replace the brain with a non-biological component digital minds will be probably an illusion until the end of times. Maybe it is my rather shallow knowledge of how the human body works but don’t chemical components like hormones also play a big role on who we are and how we feel?

Yes, but the thing is, whilst the brain is an electrochemical machine, the way those chemicals act on its components is quantifiable. They’re repeatable, since the same chemical is going to trigger the same results from the glial cells and neurons upon each exposure.

Since we know what the effect of each chemical will be, if we’re creating a virtual version of a brain, we can then use a virtual version of those chemicals – since all that matters is what those chemicals actually do to the thought process, we can replicate their effects directly, without requiring the actual chemicals themselves.
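A toy illustration of that substitution: instead of simulating dopamine molecules, the virtual brain applies the gain change they are known to produce. The chemical names, numbers, and the `firing_rate` function are illustrative assumptions only, not real neuroscience.

```python
def firing_rate(input_current, modulators):
    """Baseline rate, shifted by the *effects* of virtual chemicals."""
    threshold = 1.0
    gain = 1.0
    # Each entry maps a virtual chemical directly to its repeatable
    # effect on the model, skipping the chemistry entirely.
    effects = {
        "dopamine": lambda level: ("gain", 1.0 + 0.5 * level),
        "gaba":     lambda level: ("threshold", 1.0 + 1.0 * level),
    }
    for chem, level in modulators.items():
        kind, value = effects[chem](level)
        if kind == "gain":
            gain *= value
        else:
            threshold *= value
    return max(0.0, gain * (input_current - threshold))


baseline = firing_rate(2.0, {})                 # 1.0
excited  = firing_rate(2.0, {"dopamine": 1.0})  # 1.5 – same effect dopamine would have
damped   = firing_rate(2.0, {"gaba": 1.0})      # 0.0 – inhibition, no molecules needed
```

The design point is that only the mapping from chemical to effect needs to be correct; the chemical itself never has to exist.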

And there might be also problems if something like a soul exists and who you are isn’t the same as the body you have.

Even if souls exist, it should be possible to create an artificial being without a soul whose actions, thoughts and behaviour are indistinguishable from an artificial being with a soul (or an organic being with/without a soul for that matter). So, does the soul really matter in this equation?

Also out of curiosity. Since you talk to people who have artificial body parts. Did you ask them if feeling with those parts feels somehow different? Not really on topic but very interesting nonetheless.

They do, but that’s mostly a bandwidth problem. Going from a natural 250,000 sensors to an artificial 14 is going to feel a little different. For some, the replacement just feels wrong – but the exact same problem is encountered in an organic transplant for a portion of the population. So it’s not something organic has that robotic doesn’t; it’s something we’re still missing when trying to connect up to the body. There are several theories why this is, but I’m not going to bore you with them here.

What I would find extremely convincing is if you could extend my body into the digital environment. Not a transfer but a connection. This way I could experience myself how it is to “be” a virtual body.

Well, SimStim’s still beyond us (Simulated Stimulation, what you’re describing). However, there have been a plethora of experiments from the 1960s up to the current day on integrating an outside observer into a virtual environment as if they’re there in the flesh, to varying degrees. The results are far beyond this thread to discuss, but maybe something I’ll bring up in another one. (I’ve done some of those experiments myself in a professional setting, so I could probably talk you to death about them ::grin:: )

But if you move away from that rather philosophical thought experiment to a more realistic case there will be ways to restore characters. At least partially.

Partially, I agree with. Fully I disagree with. But I’ve already outlined my rationale for why earlier in the thread.

You also asked if this plays a role for morality. I think the answer is yes. There is a reason why we consider a murder to be something more bad than a theft. You can never completely undo the damage of a crime. But death can’t be repaired in any way. So being able to restore a person at least to some degree is better than nothing. And that would probably lower some thresholds to not so moral behavior.

Which for me is the really interesting part. As you said after this, I’ll have to make sure the survey is anonymous. That’s a given for the nature of what’s being asked of people.

I’m particularly interested in doing this to then compare to what the situation is actually like when this is beginning to be a reality – to see how great the influence of mirror neurons is on the equation. I.e. a person thinks they can mutilate people guilt-free, but how does that actually hold up when they’re full witness to the pain they’ve caused and can see the suffering up close and personal?

To put your idea into a different context. How do you think would people behave if you gave them godlike powers like being able to jump back in time and immortality?

Slightly flawed comparison, as doing that would have knock-on effects for the world they themselves belong to. Doing the same for an encapsulated world would have no such knock-on effects. It’s completely isolated from the player’s outside life.

Topic: Serious Discussion / More evidence that Autism Speaks is a hate group.

Originally posted by onemorepenny:

“The problem in your thinking here is that research into the functioning of the human body IS a finite system…”

The body is finite, but our method of solving problems using language is infinite. This is what I meant. We are not trying to find organs in the body and label them.

It has nothing to do with the capability of language to find an infinite number of words to describe the same thing. It is exactly like trying to find organs and figure out what they do.

Only in this case, it’s more a case of trying to find out which genes are involved in autistic spectrum disorders. What those disorders actually manifest as in the brain. Which proteins are involved, and how they are involved. It’s breaking those organs down into much smaller pieces than the whole, where they get all complicated and interconnected, then figuring out how they tie together, particularly when a certain kind of thing has gone wrong.

It is exactly the case that you’re reverse engineering a very complex system in order to find out the causes of a complex range of conditions under the same banner for how the symptoms present themselves. Similarity in symptoms does not always mean similarity in causes.

And please do cite any relevant, repeatable progress achieved to help nearly all affected. Not the usual, “We have made progress, but there is more work to be done.”

Autism research is not my personal area of expertise. So I’m not going to be able to point you to the latest research with any accuracy, nor am I going to be able to list for you every study that has ever been done. Still, five minutes looking through Google provided the following:

Differences between the gut microflora of children with autistic spectrum disorders and that of healthy children

Hypothesis for a systems connectivity model of autism spectrum disorder pathogenesis: Links to gut bacteria, oxidative stress, and intestinal permeability

Model of autism: increased ratio of excitation/inhibition in key neural systems

Unbroken mirrors: challenging a theory of Autism

Why the frontal cortex in autism might be talking only to itself: local over-connectivity but long-distance disconnection

Hopefully from these few papers, you get an idea of the sheer scale of the problem being tackled. I’ve chosen those specific papers, as many if not most of them originated from discoveries made in other experiments whose results were other than what was predicted. One’s actively dismantling someone else’s theory of how autism functions.

It is only through trial and error – both the trials and the errors – that science advances. Expecting immediate 100% successful results to a complex and ill-defined problem is…unrealistic. The errors and blind paths help us define the problem in the first place so we can better tackle it.

Also yes, this takes money. Researchers need to be paid. Electricity costs need to be paid. Building maintenance, equipment costs, research material costs, journal publishing costs … it all adds up, and the end result is not research being done for free. That model simply does not work for anything more complex than daydreaming in one’s head.

Topic: Serious Discussion / Hillary Clinton is paying people to "correct" people's opinions online

Karma, I think they’ve always been as easy to start. However, the internet and related technologies have made it easier than ever before for information to spread between geographically isolated populations. What Bombcog is saying is that the net’s very strength at spreading information beyond geophysical boundaries applies equally to education and misinformation – the net is neutral, and does not judge the value of the information being passed, or even its accuracy.

A secondary problem is that people like yourself, like Bomb, like myself, are in the minority. We read sources through, compare and contrast evidence, look for multiple sources to understand, or seek out the raw data. A majority of people don’t do that. They were told by [trusted authority source] that this is the Truth, so they don’t question it. Easier access to information is irrelevant if they don’t feel the need to look.

For examples, consider many of the transient posters we’ve had in SD. Before finding out that SD is not really right for them, they try to argue by googling articles they think back up their stances, reading no further than the first line or two, then posting their ‘evidence’ – only to have their argument torn apart by those of us who actually read the provided source in its entirety and find it disagrees strongly with the person who posted it.

This is common: people reading only a ‘synopsis’ of a document (even when it’s obvious the ‘synopsis’ is just click-bait wording and not an actual synopsis at all), assuming they understand the whole document, and not bothering to read any further, are the actual problem. They wish for bite-sized, easily-digested snippets of information they don’t have to think about.

Giving such people unlimited access to educational resources they’ll never bother to use is useless; all the internet does in that instance is make it much, much easier for the misinformation to spread, as it’s usually more palatable than actual information can be. Actual information demands nuance; misinformation is black-and-white and free from nuance. Misinformation is always going to be simpler to digest, with less thinking required or even encouraged.

Really, to make a dent, we’ll have to tackle how people are taught to process information, encourage a culture of exploration and intellectual growth. We don’t really have that in the mainstream at the moment, and that’s the source of the actual problem.

Topic: Serious Discussion / More evidence that Autism Speaks is a hate group.

Originally posted by onemorepenny:

Repeated research failures are never valuable. You assume otherwise thinking this to be a finite system.

The problem in your thinking here is that research into the functioning of the human body IS a finite system. The system is very large, but there are only a finite number of possible genes in the genetic structure, only a finite number of genomes in the body, only a finite number of possible epigenetic changes, only a finite number of possible ways for the quarter million or so proteins to mis-fold… you get the idea.

Each attempt to devise a cure for autism thus improves our understanding of the inner workings of this incredibly large and complicated, but still finite, system. We learn more about which proteins are and are not involved by targeting them and finding out what causes them to mis-fold, even if it does not directly lead to a cure. We learn more by backtracking the knock-on effects from an attempted treatment that failed, and seeing what caused it to fail and why.

Each failure teaches us more about the inner workings, and opens up other possibilities for research that had not been considered before (because we did not realise those elements were involved, until the failed experiment indicated such).

As such, the failures are incredibly valuable, and inevitably lead to a deeper understanding of the problem, than we would have had if we’d created a solution that worked out of the gate – that solution would inevitably have not worked for everybody, and we wouldn’t understand why it even worked in the first place.

I feel the entire research field is very laxed as of now shown by the utter lack of anything substantial for the past couple of decades other than technological advancements(iPhone makes us happy)… there is a sense of comfortability, satisfaction, and any additional research is aimed at furthering them, not because there is an actual necessity from a researcher’s point of view.

Whelp, the one thing that is certain from THAT statement, is that you have absolutely no clue what you are talking about, and have not been following any of the major journals whatsoever, nor speaking with any of us actually involved in research projects at the post-doc level and beyond.

Topic: Serious Discussion / More evidence that Autism Speaks is a hate group.

As I said before, onemorepenny, a research failure is extremely valuable.

You learn from each failed line of reasoning, and understanding why that approach failed is instrumental in slowly uncovering the actual mechanics of what’s going on. Each failure literally helps pinpoint an eventual solution.

If we succeed right out of the gate, great. We have a solution. But why does it work? What’s really going on? How can we adapt it?

With failures, we learn much more, and can develop solutions much more accurately.

Topic: Serious Discussion / Actual Minds in Videogames

Originally posted by petesahooligan:

But chess is an embodied environment. The players utilize avatars to simulate physical movement based on player motivations and environmental observation. There is cognitive reasoning based on dynamic situational conditions, among other things.

What is your definition of “fully embodied?”

I define fully embodied as, well, as fully embodied. Whole body there, and delivering sensory feedback.

The context of this conversation is not (as I understand it) to discuss “fully embodied” creatures. I would take that to mean robotics and shit. My understanding is that we’re discussing the creation of passable human simulations.

Yea, that’s another relative knowledge faux pas on my part. Sorry about that :/

You’re right in that the human body you have now is your embodiment, it is the vessel through which you perceive the world. If I was to make you a robotic body and transfer your mind into it, that robotic body would be your embodiment, as your mind would see itself as being entirely within that robotic body and receiving sensory information from it – that robotic body would be you.

What I’m talking about is another type of embodiment called virtual embodiment. The only difference between it and what you would consider normal embodiment is that all the hardware that makes up the body is replicated by software instead. The embodiment is just as full, just as total from the point of view of the mind inside that body, but this time the embodiment is ‘inside’ an avatar, with a mess of associated code that replicates the function of the non-conscious part of the brain, allowing that mind to control that body and receive feedback from it.

You still have that same mess of code right now, btw. Your hindbrain is hard-coded in function, and won’t change. It serves as the interface between your mind and your body. All virtual embodiment does is take those hindbrain functions and reroute them to a software body rather than a hardware one.

I’m not being obstinate. I’m pointing out that there’s a more meaningful (and I think exponentially more interesting) consideration on what programmatic factors would be required for creating a simulated human.

I know. I agree. However, I specifically chose virtually embodied NPCs in an encapsulated environment for a reason. It is that specific application of AGI I wished to discuss, precisely because the result is an AGI that is entirely disconnected from our world, but one which players can interact with through a gaming interface.

It’s a unique situation, and one which hasn’t really been covered. As such the other, more over-arching uses for AGI are outside the tight focus of the thread I’m trying to keep to. They can be discussed in their own threads. But because the situation is so uniquely different in this one, this specific situation is the one I wish to discuss.

No, you’re wrong. It actually is true in our world.

It really isn’t. As you go on to state yourself:

We know that people evaluate stimulus subjectively. We relate to each other because we perceive similarities. It’s basically what “relate” means. We see how we are alike. When we relate to each other, we assign others to a group in which we belong, or to a group in which we do not belong.

Thus we’re not automatically seeing everyone else in the world as our equals. It’s where racism, sexism, ageism, etc(ism) come from. You said morality in the simulated world could only be considered if everyone saw everyone else as their absolute equal. Again, this is not true in our world, and you even acknowledge it isn’t. So why then do you insist it has to be so in a lower-order world?

I think this idea is interesting. I think you accidentally reinforced my earlier claim; you cannot create a “simulated human” with “human-like experiences” without simultaneously creating a rich environment.

Sort of. You can use smoke & mirrors a lot to cover the gaps. E.g., if the NPC touches an object with the texture of fine wood set, intermediary code can kick in telling them their fingers are feeling the fine grain of the wood. The fine grain doesn’t actually exist in the world; you’re just fooling the sensory system into thinking it does. You could equally use the same mechanism on the player, to equal effect. (A little more complex for the player, since their embodiment is outside the simulation and cannot be modified directly by the simulation’s code. Still doable, though, just more convoluted to achieve.)
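The smoke-and-mirrors trick described above can be sketched in a few lines. The world only stores a texture tag; intermediary code fabricates the matching touch sensation on contact. Every name here (`SENSATION_FOR_TAG`, `on_touch`) is a hypothetical illustration, not code from any real engine.

```python
# Lookup from texture tag to the fabricated sensation it should produce.
SENSATION_FOR_TAG = {
    "fine_wood":   "fine grain running under the fingertips",
    "rough_stone": "cold, pitted surface",
}

def on_touch(npc_senses, surface_tag):
    """Inject a fabricated tactile event; no micro-geometry ever exists."""
    sensation = SENSATION_FOR_TAG.get(surface_tag, "featureless surface")
    npc_senses.append(("touch", sensation))
    return sensation


senses = []
felt = on_touch(senses, "fine_wood")
# The NPC's sensory log now reports fine grain that was never modelled
# anywhere in the world geometry – only the tag was.
```

The same indirection is what would make the player-side version possible: intercept the contact event and synthesise the feedback, rather than modelling the surface for real.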

Imagine, for example, walking down the street and taking an accidental turn only to find that the street you mistakenly found yourself on was in the process of being rendered. You would probably think you were losing your mind. This is what I was getting at when I was pointing out that you cannot create a passable human simulation if you do not allow them the desire to join the circus. Without their free will, and a capacity to self-realization and reflection, and a desire to be philosophical or spiritual, you are going to create an “other” that will not pass as a believable human.

Why do you keep insisting they don’t have free will? It’s a cornerstone that they have to have free will. Else they’re not believable individuals.

However, they also have memories (initially fabricated for them, later their own experiences) and a personality (fabricated from whole cloth for them at creation, modifiable by experience afterwards). Between these two, they create a foundation of identity that greatly shapes future decision making.

Sure they could run away and join the circus, and that would be possible – if they were subject to situations that made them consider the idea, in spite of the gestalt of their previous life-path (both fabricated & real as they would by definition not be able to tell the difference between the two).

I admire non-human intelligence. I think apes, dolphins and elephants are bad-ass creatures that don’t get enough respect. But I also think it’s foolish to anthropomorphize their life experience in human terms… it’s hubris. We know that these animals have different perceptions and different sensory awareness than us, and therefore their sense of self-identity is almost certainly something that we would not recognize or relate to. We only relate to those traits in them that we recognize in ourselves.

You specifically said they don’t have feelings. That’s the bit I object to. Pavlovian conditioning wouldn’t work if animals weren’t capable of feeling emotion. Yet Pavlovian conditioning is something used extensively right across the animal kingdom. It demands an emotional response association to function.

The rest of what you said I agree with. I will also state again that the NPCs don’t have to be human. What they have to be is embodied in a form appropriate to their race, with emotional cues and thought processes expected of their race. Human is one option. There are limitless others.

The only constant would be that they be self-aware, sentient, sapient individuals, at least from both their own POV and the player’s.

Topic: Serious Discussion / Sixth Extinction

Originally posted by ImplosionOfDoom:

Hey if we’re already genetically modifying our crops and livestock, why not use that same tech to try to save species that won’t be able to naturally adapt fast enough to save themselves?

How does that work with species such as pollinating insects, where we don’t actually know why they’re disappearing in the first place?

We lose the bees, and our crops all fail. It’s a ‘minor’ problem there are no ready solutions to.

Anyhow, to quote bombcog:

Can we stop talking about science fiction and focus on the actual, real, data-supported mass extinction barreling down on us?

It’s an unpleasant thing to contemplate. The majority of the human race is not going to survive. That’s pretty much certain. The question is, what can we save? Ideally I’d like the essence of our civilisation to survive, and our hard-earned knowledge with it.

I have a perhaps slightly unorthodox approach Bomb, in that I’m not wholly concerned if the human race survives, so long as people do. As such, it’s a race to mature technologies capable of ensuring our survival in some form, before the inevitable collapse.

A metaphor I’ve oft-used over the years is that we’re standing on the blade of a dagger, the other side of which is resting point-down on a mountain peak encircled by a ravine. We made that dagger ourselves, and got ourselves into our current predicament. Our tech, our natures, our society all contribute to it. What we’re trying to do is strengthen that dagger, anchor it in its current position more deeply, so we’re less likely to lose our balance and fall. We’ll fall eventually, but the longer we can maintain balance, the more likely we’ll be able to figure a way out that doesn’t involve plunging into that abyss.

That’s my stance on the extinction level event really. Aware it’s coming, and trying to help maintain balance as long as possible whilst searching for a way for our civilisation to at least survive the event.

  • Biosphere research is one possibility. If we can maintain a stable biosphere, then we can work on maintaining a stable biosphere in low Earth orbit. An ark up there is pretty much immune to whatever calamity befalls the Earth, short of a complete and irreversible loss of its own biosphere. Currently there are problems maintaining an artificial biosphere, but the tech is much, much closer to maturation than genetic engineering on a planetary scale. The last biosphere project almost became self-sufficient. It failed because of side-effects stemming from the construction materials used in the enclosure itself.

Non-biosphere satellites can also store our collective knowledge. Again, this is an existing technology, and something we could actually do today if we were so inclined.

  • Another, is to look to robotics and AGI. If we accept that human civilisation will be toppled in the extinction level event, we have two possibilities there:

1. If humans are likely to survive as a race, then we can use weak AI (learning algorithms, narrow-focus neural nets, etc. – stuff we already have) in purpose-built facilities in natural hardpoints (mountain ranges, elevated deserts, etc.) as custodians of our knowledge, waiting for humans to relearn the basics of civilisation and discover them.

2. If humans are not likely to survive as a race, then a different approach is required. Strong AI (AGI, or self-aware, general-purpose artificial intelligences) would be required. Nothing like the scale of planetary re-engineering as in the genetic engineering approach would be needed; just small pockets of isolated examples.

The rationale being that if our meat-children are doomed to extinction, then we can pass the baton to new children we’ve made, to carry on civilisation in our stead. AGI isn’t here yet, but it’s something being actively worked on, to a degree that I think is likely to see results this century. If this is the option we’re looking at, we have to hold on as a civilisation long enough to give birth to theirs.

  • A third, is of course genetic engineering. This too has possibilities, but given the scale of the problem, and how complex our own biology is to reverse-engineer, as well as how much pure ‘trial and error’ is inherent in the process, this potential solution requires we hold on to our civilisation and delay the oncoming disaster as long as possible.

A note: At this point, I don’t believe we can stop an extinction-level event from occurring. We’ve done too much damage to the biosphere, built too much on a foundation of finite resources and inherent corruption. Our goal now, as I see it, is to delay the collapse as long as possible, to give us more options for surviving. Ideally as a species and a civilisation. If not as a species, then as a civilisation.

Oh, and for those who don’t believe the scale of the problem, I’ll end this post with a link to a few images and contextualizations to contemplate.

Topic: Serious Discussion / Actual Minds in Videogames

Originally posted by petesahooligan:

Well, Ms. Fancypants, I’ve spent over 12 years in professional game development specializing in learning environments and simulations.

And my experience is in replicating the senses digitally. What’s your point?

Animals do not have feelings. They simply respond to stimulus. Knowledge is power.

Since humans are animals, we arrive at a consensus that humans do not possess feelings, and thus may be used and abused freely without doing any harm.

The nature of reality is probably not a productive direction for this conversation. My interpretation of your original premise was based in an observation that NPCs in simulation games did not do a good job simulating human behavior, particularly in response to environmental stimulus.

Current NPCs are scripted. My hypothesis is that scripted, rigid behaviour is going to create a different result from freeform, dynamic behaviour. You are, of course, free to disagree.

My response was rooted in the concept that a sensory-rich environment would be required if we expected the NPC to behave in a humanist way. Others shared that the processing load of creating this simulated environment may not represent valuable return on investment.

Since they have also repeatedly said they did not understand what I was asking, I believe this objection is now moot, following my third explanation of the idea, this one from a more technical standpoint. There may have to be a fourth if this one is not clear enough. Either way, I maintain that reproducing biological systems electronically is both possible and likely (considering it’s already being done, and all).

Hence that comparison of a “perfectly believable” NPC in a sensory-deprived environment, (i.e., chess).

If the environment is non-embodied, I don’t see the point of trying to use it to explain your reasoning why a fully embodied creature is not possible. Do you? Really?

I maintain my assertion — based on the dozen or so reasons stated earlier — that this is fundamentally impossible.

Then leave the thread. You refusing to stay on topic of the thread doesn’t get any more charming with age.

Applying morality to a simulated environment would presume that all of the actors, (NPCs and PCs… AI and human), share a moral context… that all the mixed up individuals relate to each other to some degree as equals.

That’s not true in our world. Why MUST it be true in a lower order reality?

As soon as you introduce a distinguishing characteristic, the morality becomes ambiguous. Essentially the same behavioral mechanism that informs racial bias would be operative. “Don’t worry about saving our henchman from the lava monster… he’s just an NPC.”

Which was … y’know, the whole point of the thread.

You could mimic human emotion pretty easily without the need for creating a comprehensive sensory-rich environment. If you focused on just the basic human emotions*, a skilled designer could probably come up with some fascinating dynamic NPCs.

Doesn’t work. They just turn neurotic. Sensory deprivation / sensation without contextualisation are the leading theories why. Hence embodiment to provide a unifying context for input data, and a framework for their mind to build upon. The goal is to stop the nascent minds turning neurotic straight out of the gate, and allow them to develop in a more constructive direction. If they turn neurotic later on, it’ll be a response to specific, extended stimulus intended to turn them neurotic.

There we agree. However I was in particular interested in other people’s views. It’s why if we cut away all of Pete’s bluster, we find a viewpoint that says it doesn’t matter how much a non-human intelligence thinks, feels, believes, is sentient and sapient.

So sweet of you to offer this unwarranted insult, Vika. I had almost forgotten how much you dislike me.

I was not giving an insult. My like or dislike for you as a person is completely irrelevant. Rather, as you consistently refused to stay on topic for an entire post, I had to take a hatchet to them in order to get to the core of the part of your views that were actually on topic.

That’s what the result was. Your view is that a non-human intelligence is utterly beneath you in every conceivable way, a non-biological intelligence doubly so. That’s data I can work with. The rest is just off-topic rambling, i.e., bluster.

As I stated previously, that mindset is one I am fully unsurprised to encounter. It’s one of the main ones I knew I’d encounter, in fact.

I’m not holding it against you in any way, Pete. That’s where your mistaken belief that it’s an insult comes from. Morality is subjective, and your morality is not my own, nor should it be.

I’m just after the statistics of how often this mindset (and the others) crop up in the general gaming populace. As well as if any unexpected mindsets crop up. As such, seeing someone express it, was not unexpected. It’s just bad luck that you’re the only one in this thread who has expressed that viewpoint thus far, so you get singled out as the example.

This thread was, and is, a prototype first attempt for a general survey. It failed in its main objective, but was illuminating in other ways. Helps me fine-tune the idea, before I do the whole ‘standing out in the snow, canvassing gamers off the street’ thing. After all, the confusion that permeated this thread is absolutely not something that can stand when I begin the actual survey. I’d rather not waste either my own time or the respondents’.

Flag Post

Topic: Serious Discussion / Hillary Clinton is paying people to "correct" people's opinions online

This concept is certainly as old as the Internet, and almost certainly as old as language in general. Spin doctoring is a common political tool to alter the perception of truth of the voting public. Russia is currently undergoing a rolling program of endlessly generating conflicting conspiracy theories for practically every subject under the sun, in order to convince their populace that there is no truth to anything and everything is subjective to interpretation.

On the internet itself, I’m familiar with this tactic being used near-constantly on IMDB, the Internet Movie Database. It’s the main portal used by the film industry to advertise and rate their products, and as such paid shills by the thousands are present. In some cases the entire cast & crew of a film have certainly hit the IMDB comments immediately, each gushing about how wonderful the film is and rating it 10 stars. This artificially inflates the rating of the film and makes it seem appealing, with reviewers often personally attacking anyone who dares disagree and post a negative review; using peer pressure and intimidation to get it changed. Six months later the film is stuck at 2.3 stars, as the release-day profits have been made and the paid shills don’t care any more.

I chose IMDB as the example I’d use because it’s exactly the same process, for the same reason. Like films, Hillary’s campaign has a ‘release date’, and so long as her ‘product’ is hyped as much as possible, with the naysayers intimidated, harassed, and driven to silence, then all is good. This isn’t a long-term strategy and it wasn’t meant to be. Once ‘product release day’ (voting) is out of the way, the campaign will stop – there’s no need for it any more as the goal has been achieved. If everyone believes Hillary is a scheming, no-good, lying sack of shit afterwards, it doesn’t matter, as she’s in power then, and above all that.

So, since as with IMDB the campaign is rolling with a set-in-stone cutoff day, a known day when all this expenditure will stop, they’re free to pile as much money into it as they like. There’s no worry about legacy costs, about ongoing expenditure. After she gets in, it doesn’t matter what anyone thinks of the product they’ve got. So, it’s a short-term massive disinformation campaign designed to make her look as good as possible, and to do that she needs any and every dissenting view silenced, so the public is, to the greatest extent possible, presented with a uniform positive spin of her as the most wonderful thing ever.

For that to happen, public gossiping about her bad points, or the good points of ‘competing products’ absolutely must be squashed flat with prejudice, by any means possible. The fear of god ideally put into those who would gossip dissenting views – that fear lasting just long enough to get past the election. After that, the pressure to be silent will evaporate.

What’s perhaps the most interesting part of this, to me anyway, is seeing just how far she and her campaign will go, in order to squish those who might talk about her to others in a negative light, or worse, not talk about her at all.

Flag Post

Topic: Serious Discussion / Actual Minds in Videogames

Originally posted by petesahooligan:

How was it determined that it was a prerequisite? The only goal that I understood was that you wanted the NPCs to be believable… the fact that they would need to feel was irrelevant.

It’s a prerequisite of embodiment that the embodied mind receives a continual stream of sensory stimulus through that body. The embodiment functions to make the mind think it really is housed in that body. As such, damage-notification senses are a prerequisite of embodiment, as are the seven major senses humans possess, if we’re designing a similar embodiment. Proprioception is obviously a must. Smell and taste could theoretically be done away with. Sight is also a must, as is binaural sound (at a minimum). Touch likewise is essential for the embodiment to seem real – though it doesn’t have to be a perfect analogue of our own sense of touch. Our sense of touch is 20 different senses all operating together to give an aggregate sense we don’t tend to differentiate from one another. A virtual embodiment doesn’t have to have those same 20 senses – it could have fewer, or more – so long as it’s continuous.
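The sense feed described above can be sketched in code. This is purely illustrative – the channel names and the aggregation rule are my own assumptions, not any real engine’s API – but it shows one tick of the continuous stimulus stream, with ‘touch’ built as an aggregate of sub-senses the mind never sees individually:

```python
# Hypothetical sketch: the embodiment supplies a continuous, multi-channel
# sense feed. 'Touch' is reported as a single aggregate of several
# sub-senses, mirroring how our own ~20 touch senses feel like one.
TOUCH_SUBSENSES = ["pressure", "vibration", "temperature", "stretch"]

def sample_touch(body_state):
    # Average the sub-sense readings into one reported 'touch' value.
    return sum(body_state.get(s, 0.0) for s in TOUCH_SUBSENSES) / len(TOUCH_SUBSENSES)

def sense_frame(body_state):
    # One tick of the continual stimulus the embodied mind must receive.
    return {
        "sight": body_state.get("sight", 0.0),
        "hearing_l": body_state.get("hearing_l", 0.0),
        "hearing_r": body_state.get("hearing_r", 0.0),  # binaural minimum
        "proprioception": body_state.get("proprioception", 0.0),
        "touch": sample_touch(body_state),
        "damage": body_state.get("damage", 0.0),  # pain / damage notification
    }
```

The key property is that every channel is present on every tick, even when its value is zero – the stream never stops.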

A lot of the confusion is my fault I believe; I’m familiar with what embodiment must entail & why. Chalk this up to an episode of my forgetting most people wouldn’t have such knowledge.

But yes, the AI’s ability to feel pain is an absolute must for the embodiment to work.

Feelings are largely based on external stimulus, but in a virtual environment the stimulus itself is a simulation, and so is the “emotional response.”

Prove our world isn’t a simulation. Go on. I’ll wait.

The NPC cannot develop a relationship with us because they have no awareness of us (as people here have repeatedly insisted).

Really? So in the games you play there is never a player-avatar? Never an embodiment in the game-world that the player controls? Must make for some very odd gaming experiences.

As much as you may want to anthropomorphize these programs, we reserve things like “feelings” to describe human behavior.

So other animals are incapable of having feelings. I never realised that. Huh, you learn something new every day.

A computer program can’t have feelings. It can only have responses to stimuli based on deliberate conscripted input. How can a computer program feel? Just because you’ve written a program that simulates an emotive character? Is that feeling?

A human mind cannot have feelings. It can only have responses to stimuli based on deliberate conscripted input (senses). How can a human mind feel? Just because they believe they have emotion? Is that feeling?

Hence why if you jab a fork into a human’s eye socket they’re not really feeling pain. They can’t feel pain. It’s all just simulated. Heck, you could stab someone over and over with a knife, toss a molotov cocktail into a school cafeteria, even slit the throat of the random old lady you meet on the street. You’re not harming anyone. Human feelings are only responses to programming after all.

That’s your same logic.

Originally posted by ImplosionOfDoom:

In which case, the big moral difference between maiming/killing the AI character V.S a flesh and blood person who ’doesn’t have the full programing suite’ is mortality.

Actually we DO have ‘the full programming suite’. That’s just our conditioned responses, our reflexes, our sub-conscious systems. All the parts of the brain outside of our control, conscious or otherwise. And as Pete said, if you accept that people with this suite are incapable of thinking for themselves, incapable of feeling, then you have proven with 100% accuracy that none of us are capable of either.

Does morality even enter into the equation when you have ‘proof’ no human is capable of self-awareness, pain, loss, or any form of discomfort?

You know for sure that the AI can be restored to its original state, perhaps just being ‘reincarnated on the next save file’ or ‘revived through save scumming’ or ‘revived by server reset’ depending on the mechanics of the game.

In theory, yea. However, there are absolutely no guarantees that the mind will develop in exactly the same way each time – logically it would wander down different pathways each time it is booted from a save point. This is because there’s a quantum element in an active mind – choices are weighted based on memory, experience, and a slight ‘spontaneous’ component known as random noise. So, each time you restore a mind to a previous save point, there’s absolutely no guarantee it would unfold exactly the same way it did last time.

Add in random events in the environment that alter the experiences the AI receives, and differences in how the player (and indeed other AIs) interact with them, and it won’t be long before the ‘saved’ mind has wandered in a different direction to the one the original took.
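A minimal sketch of why restored saves diverge, using a toy decision rule of my own invention (the weights stand in for memory and experience; the noise term is the ‘spontaneous’ component mentioned above):

```python
import random

def choose(options, weights, noise=0.1, rng=None):
    # Perturb each learned weight with a small random-noise term,
    # then pick the option whose noisy weight comes out highest.
    rng = rng or random.Random()
    noisy = [w + rng.uniform(-noise, noise) for w in weights]
    return options[noisy.index(max(noisy))]

# 'Restoring a save' restores the weights, but not the noise stream,
# so two runs from the same save point are free to unfold differently:
save = (["flee", "fight", "talk"], [0.50, 0.52, 0.50])
run_a = [choose(*save, rng=random.Random(1)) for _ in range(5)]
run_b = [choose(*save, rng=random.Random(2)) for _ in range(5)]
```

When the weights are close together, as in `save` above, the noise term dominates and the restored mind’s choices wander; only when one weight far outstrips the others is the outcome effectively fixed.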

Oh and it should also be noted that when somebody dies in real life it usually leaves behind a trail of grief stricken family and friends…. Even if ‘the person was like a meaty version of an emotionless robot’, somebody will still probably miss them.

Which just adds another layer of experience to the encapsulated world, doesn’t it. You now have friends/family/loved ones of the dead person reacting to the new data in a likely pain-filled way. Forget any concept of scripting; this kind of reaction is what pulls the train completely off the tracks and dynamically reshapes the gameworld in real time, shifting and reshifting based on events that occur within it.

You could never hope to script to cover EVERY possible player decision or wished-for interaction. No matter how many hundreds of millions of game dev hours you poured into the scripting, you’re going to miss an infinite number of interaction possibilities. With dynamic, living, breathing minds in the gameworld, you don’t need scripting almost at all, yet the result is far more dynamic and real than scripting could ever be.

Originally posted by Tulrog:

Imagine a game that runs on a server which nobody (not even the admins) has access to. So no server restarts, savepoints and the like. Also ignore the likelihood and usefulness of such a game for a moment.

Now imagine an AI guy who lives somewhere far away from settlements. This guy has a Deadric weapon. Or a Power Armor if you prefer Fallout.

Would you kill this guy to get his item?

Yup, I have to agree with Tulrog here. This is where things get REALLY interesting. This kind of possibility and how the player would react to it is at the heart of my original OP.

Personally I am still puzzled about what sort of AI Vika is thinking about. And this is pretty much the most important question to decide how I would behave.

If you don’t mind me diving off into the deep end of terminology for a moment, I can answer that, Tulrog. I was TRYING to keep the discussion low-key and relatively universally understandable. It’s obvious that I’ve failed. So what I was envisaging was:

A fully integrated, virtually embodied artificial general intelligence (AGI) existing as a real person from their point of view, encapsulated within a lower order universe than our own, a fully functional and complete reality from the point of view of the encapsulated AGI.

As their reality is encapsulated, they have no way out of their reality, nor any way of knowing their reality is in fact a simulation. There are a plethora of tricks you can use to keep a sentient, self-aware mind ignorant of the gaps in reality even if they study them, and these would be in full force here. The AGI is from their own perspective a fully functional, sapient, sentient, self-aware individual physically located inside a material body.

This is absolutely key.

From the developer’s perspective, and from a technical standpoint, they have no physical form, and their sentience and self-awareness is arguably an illusion (depending on implementation; taking certain shortcuts makes sense from a computational-overhead perspective, especially if you’re going to be running several dozen such minds on a single home gaming computer).

From the gamer’s perspective, they behave like real people – they seem to think, feel, react, interact like real people with dynamic, changing ideas and dynamic consequences. That’s kinda the whole point of this.

But again, from the perspective of the NPCs inside the game-verse, they ARE real people. They have memories/written materials/physical evidence of the existence of their whole civilisation, of the existence of their race, of the existence of the world around them, and they have their own sensory inputs as direct confirming evidence that the world around them is real. They can see it, smell it, touch it, taste it, etc. They can feel their own bodies through proprioception. They know they are real – they think, therefore they are, etc.

Tough question. Probably not. The main question is when does a simulation stop being a simulation and starts being something real?

Can’t it be both? That’s the central point I’m driving at here.

And what makes me even more think is that I am not sure if you need a biological body to have a feeling of consciousness.

I’m certain you don’t need a biological body at all. Proving you don’t is still a work in progress (the Blue Brain Project), but I’m a firm subscriber to the mindset of computational biology: you can replicate any biological process entirely non-biologically if you understand the rules that govern it and can recreate them mathematically. Neural computation is no different.

Heck, my own field of study demands this mindset, and we’ve gotten repeatable, consistent, demonstrable results removing the biological from the system and simulating it synthetically with accuracy. The smart prosthetic limbs I work with, when they’re working at their absolute best, take neural data from the human nervous system and pass it to robotic equipment, to control that equipment in the same manner as the brain intended the original physical equipment to be controlled. Then (and this is the important bit) that same robotic equipment can take data from its utterly inorganic sensors, pass that data back into the human nervous system, and completely fool the brain into believing it is coming from the original biological mechanoreceptor systems.

In short, the biology does not matter. Whatever data biology can deal with, non-biological systems can completely replicate.

But to me there is no reason to treat a digital person without a “biological body” any different from a “real” person.

There we agree. However I was in particular interested in other people’s views. It’s why if we cut away all of Pete’s bluster, we find a viewpoint that says it doesn’t matter how much a non-human intelligence thinks, feels, believes, is sentient and sapient. If they’re non human, and/or non biological it’s perfectly okay to torment and abuse them however we like with no consequences, either external, or to their self’s moral code.

I knew such a subset of viewpoints was out there when I made the OP. Other studies have found them before. What’s interesting is how that maps onto the gaming population. I chose a gaming-based, 100% encapsulated world for that reason, after all.

Flag Post

Topic: Serious Discussion / Actual Minds in Videogames

Because they aren’t “actual” minds. They are merely simulations of cognitive beings that are programmed to respond to stimuli according to specific characteristics.

Thank you for your response. I cannot quibble with it, because those are your genuine feelings, and are what I asked for. However, if you don’t mind, I am curious about one thing:

1. They do not feel

How can you state that with certainty, when it is a prerequisite for such beings that they must feel?

Flag Post

Topic: Serious Discussion / Germany to arrest a comedian for satire

Originally posted by Jantonaitis:

Why wouldn’t satire automatically fall under freedom of speech?

It’s just a case of an old, old, ooold world country having had laws that were passed throughout an extremely long period of time, covering a great many historical ages and situations.

It is on the face of it, a case of exercising free speech, but you have to take into consideration all the old laws the host country may still have on the books. It’s not unheard of in European countries for a law to date back a thousand years, when it was a completely different world.

If anything, I’d expect many Asian and African countries to be even worse in terms of baggage from ancient laws, still on the books.

It’s only through cases such as this that the old laws are dredged up to be applied, and we then have these sorts of discussions about whether the law truly is valid any more, or if it’s time to retire it. Which is precisely what is going on in Germany at the moment.

Flag Post

Topic: Serious Discussion / Actual Minds in Videogames

Originally posted by petesahooligan:

I would kill every motherfucker I encountered. Why would I not?

Thank you for (finally) addressing the core question I posed. Could you please elucidate a little on why you think having intelligent minds inside a gameworld would cause you to kill every mind you encountered? I’m as interested in the rationale as I am in the individual base answer.


Implosion, I’d like to revisit two of the things you said in your earlier post, which have been mulling around in my brain since I read them:

Originally posted by ImplosionOfDoom:

Still, aside from the player intentionally acting ‘insane’ by NPC standards, video games have lots of mechanics intended to make the experience less frustrating for players (Respawn, among other things…)

If you’re making a game (that is, a thing of pure entertainment) and trying to make the NPCs as realistic as possible with all sorts of atmospheric actions (Eating, drinking, sleeping, etc) and things like NPC permadeath (NPCs must wait until the next save file to reincarnate) you’re still going to have to include some ‘anti-frustration features’ for the player for the game to remain ‘fun’….

Especially if the NPCs start noticing stuff like players mysteriously coming back from the dead, not feeling pain, not needing to eat/drink/sleep/use the toilet, not aging, etc. (If you saw somebody doing any of that in real life, especially with no explanation you’d probably freak out or question your sanity. If you and multiple other people saw somebody doing that…. Well expect mob mentality to do terrible things.)

Essentially the NPCs would be noticing something inconsistent within the game’s definition of ‘reality’… The Player-characters…

There are to my mind, three possible ways round these potential problems:

1). You translate onto the player-character the actions expected of them when the PC is idle. Eg, a small needs-based replacement for the standard ‘idle’ animation that’s a bit more than an animation: has them peeing in a nearby bush, eating from supplies, catching a quick nap, et al, based on how long it is since they’ve done any of the same, and how ‘urgent’ it is. That would be enough to preserve suspension of disbelief for the NPCs. You could additionally penalise the PC for going too long without sleeping: a warning, followed by a second warning, followed by skill degradation from tiredness, for example.

2). The game retroactively edits the NPCs’ memories. If they start thinking it’s been a while since the PC went on a bathroom break, the game engine supplies a time they did, into the NPC’s memory as a priority instruction. It wouldn’t have to be exactly specific; just supply the general details, and let the NPC mind’s inner workings fill in the blanks. Using this method, it wouldn’t matter what the reality was, since all the NPCs remember is the fabrication that these things have happened as normal.

3). The gameworld’s reality doesn’t have to be based on the same rules that govern our reality. If the gameworld has a mechanic that permits respawning, or an in-universe explanation justifying why respawning happens in that particular area and the NPCs have a (to them) lifetime of memories convincing them that it is normal, then the ‘problem’ is a complete non-issue to them, regardless of how strange fridge logic might make it seem to us.
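Option (1) above could be sketched roughly like this. The thresholds and names are invented purely for illustration, not taken from any real engine:

```python
# Hypothetical needs-based replacement for the standard 'idle' animation.
# Thresholds are in seconds since the need was last satisfied.
NEEDS = {"eat": 4 * 3600, "drink": 2 * 3600, "sleep": 16 * 3600, "relieve": 3 * 3600}

def overdue_needs(last_done, now):
    # Return overdue needs ordered most-overdue (most 'urgent') first.
    overdue = [(now - last_done[n] - limit, n) for n, limit in NEEDS.items()
               if now - last_done[n] > limit]
    return [n for _, n in sorted(overdue, reverse=True)]

def idle_tick(pc, now):
    # While the player idles, act out the single most urgent need
    # (pee in a bush, eat from supplies, catch a nap) and reset its clock.
    for need in overdue_needs(pc["last_done"], now):
        pc["last_done"][need] = now
        return need
    return None
```

Handling one need per idle tick keeps the behaviour looking like natural pottering-about to any NPC observers, rather than a burst of chores.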


Flag Post

Topic: Serious Discussion / Actual Minds in Videogames

Originally posted by ImplosionOfDoom:

Any who, a lot of that was kind of an example of why a ‘human AI’ wouldn’t be something you’d want to include in every game. And why you’d really have to tailor the game around the AI mechanic to make it work.

I don’t believe I ever said it would be used in every game. There are times when such is appropriate, and times when it is not. Practically, when you do use it, a lot of the time you’d have AIs running on autopilot with a pared-down mind. Humans in our world do this a LOT. When doing a task you commonly do, or travelling a route you commonly travel, your mind goes on autopilot. When you come out of it, you can’t remember what you were doing for the last 5 or 10 minutes, as your brain was running on autopilot. Most people tend to encounter it a lot when driving – they find, when they do think about it, that they cannot remember what they were doing for the last few minutes as their body just drove around without really thinking. But it crops up in all spheres of life.

Thus, for a simulation it’s equally natural that we could get away with paring the mind back when they’re doing a routine task, and only revving it back up again when their senses detect something major in the environment has changed; any ‘musings’ they had whilst doing the task could be computed in one hit, then supplied to them as a memory.
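As a rough sketch of that throttling (the class, threshold, and return values are my own invention, not any existing AI framework):

```python
class NpcMind:
    # Illustrative autopilot throttle: run a pared-down mind during
    # routine tasks, and rev it back up when the senses report a
    # significant change in the environment.
    def __init__(self, change_threshold=0.3):
        self.autopilot = False
        self.threshold = change_threshold
        self.deferred_musings = []

    def tick(self, env_change, routine_task):
        if routine_task and env_change < self.threshold:
            self.autopilot = True
            self.deferred_musings.append("deferred")  # think about it later
            return "autopilot"
        if self.autopilot:
            self.autopilot = False
            # Compute the deferred 'musings' in one hit and hand them to
            # the NPC as memories of thoughts it 'had' while on autopilot.
            self.deferred_musings.clear()
        return "full"
```

The point of the design is that the expensive full-mind computation only runs when something actually warrants it; the rest of the time the NPC just remembers having thought.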

Then we’d be expecting the machine to keep track of player character behavior to build / program their ‘NPC substitute’ to act in a similar fashion. If you’re not sure if a character is still being controlled by a player, or has been taken over by the NPC and the program is ‘slightly wrong’ about the nature of the player the character previously belonged to, you might be wondering why your IRL friend is acting like a jerk all the sudden… Or is just otherwise acting ‘out of character’.

I would say that developing a new mind based on observations of the actions of occasional extrusions of an existing mind from a higher order of reality, in order to copy the behaviour of that exact mind, is many, many orders of magnitude more difficult than what we’re discussing here.

Originally posted by petesahooligan:

I’m not concerned with entertainment whatsoever. I’m solely interested in immersion and believability. The goal, as I see it, is to understand the challenges with trying to design a simulated human in a simulated environment that can “pass.”

It doesn’t have to be a human. I’m not sure where you got that little nugget from. It can be any species native to that gameworld. However, as gamer has been saying, the members of that species will obviously believe their species is real, and their world is real – reality is what their senses tell them is real, just as it is for us.

In all cases though, they would have (or at least have the illusion of having) independent minds, memories, personalities, beliefs. They would have a place in their society/culture that would both make sense to them and those around them.

Their interactions with the player and with the other NPCs would be based on their initial, starting memories, shaped by the personality they were initially given and by their embodiment. After that initial creation moment, their continuing personality, memories and beliefs would be shaped by the experiences they got from the player, other NPCs, and their interactions with the environment itself.

Human is one possibility, but equally valid is humanoid, bipedal, quadrupedal, and other.

The important point however, is that the gameworld is the only world they know. They are encapsulated entirely within it, and (barring a LOT of re-coding by a particularly strange end-user that is waaay beyond the scope of this thread) will NEVER experience anything outside of it. That world is their home, their reality, their universe. They are sapient individuals (for all intents and purposes) who live in that world.

THE GOAL is to discuss how the presence of such entities would change your gaming experience. You’re welcome to try and discuss the difficulties of creating such if you like, but I would appreciate it if you could at least pay lip-service to the subject of the thread.

Flag Post

Topic: Serious Discussion / Gun CONTROL issues

Flip-flops are good, wholesome and wonderful things. After all, all our computers use D-type flip-flops at the core of their processors.

Possibly a wee bit too nerdy for ya? lol

Flag Post

Topic: Serious Discussion / More evidence that Autism Speaks is a hate group.

Originally posted by onlineidiot1994:

I’d rather see their budget go towards research than to family services honestly.

Likewise. There’s a misconception about research that sometimes crops up:

Originally posted by onemorepenny:

One thing people don’t understand about research is that no amount of funding can guarantee results.

This isn’t, strictly speaking, accurate. Quite often a research failure is far more valuable than a research success. This is because what the failure teaches us about what the mechanisms of the condition do not involve is usually far more illuminating than a success that shows us one viable treatment path.

In other words, as scientists, we learn far more from our failures than our successes. That actually brings the likelihood of eventual success much closer, as we have a much clearer picture of the outline of the problem we’re trying to solve.

Flag Post

Topic: Serious Discussion / Actual Minds in Videogames

Originally posted by onlineidiot1994:

If they made a game with infinite replayability and you could have all the playtime you could ever want, then how would the video game industry be able to make money off you buying their next game, the DLC, etc, etc, etc.

By continuing a model they’ve already started – selling DLC packs for existing gameworlds. Expanding them ever onwards, and releasing new ones with different and interesting mechanics. Or continuations of the old that increase the fidelity of the world but support a full savefile import from the old one. Etc.

There are a myriad of ways to make money off such a model, and all benefit from the continuing high-investment of the players.

Meanwhile, more simple or casual games will continue to sell much as they always did.

Flag Post

Topic: Serious Discussion / Gun CONTROL issues

Originally posted by thepunisher52:
You are right, no one needs a 30 round full auto for personal safety, that is, unless you are in GTA universe.

Yea, a friend of the family who still lives in Kentucky tends to go shopping with at least six loaded pistols on his person. One sticking out of each boot, two holstered to his pants, two on shoulder holsters, and I always wonder wth for? I’ve brought this up with him. He lives in the middle of a small city for christsakes, why does he need all the firepower to go pick up groceries?!?

He was the well-meaning, if not (in that instance) entirely smart, person who purchased my Glock for me. He did so without even telling me he was buying it on my behalf. Then shipped it internationally in the middle of a larger parcel. Gee, thanks. If that had been discovered…

I’ve since taken pains to render it non-functional, as I cannot stand firearms. Keeping it in a plastic container filled to the brim with saline water, and stuck up in the loft (attic) for ten years has rendered it rather rusty, probably rather smelly, and hopefully non-usable, but it’s another example perhaps of the casualness with which some gun owners treat firearms.

Flag Post

Topic: Serious Discussion / Gun CONTROL issues

See, it would be fine if anything more powerful than a personal sidearm was only available at a gun range (and not to leave the range), but for whatever strange reason that’s completely unacceptable to the US gun owning population at large.

I can understand the use of personal defense firearms particularly for physically weaker individuals who need an equaliser. I don’t understand the desire to own 50 or 60 firearms.

Flag Post

Topic: Serious Discussion / Gun CONTROL issues

The tank gun can be fired. However the shells are extremely hard to get and regulated out the wazoo and beyond. With that size barrel, a home-made shell would really not be a good idea. One mistake in making the shell, and you’ve blown up your own tank.

High-altitude bombers aren’t exactly precision fighting machines. For those to be employed, you’re looking less at tyrannical government and more at complete genocide.

That’s a good point about the miniguns. I remember there was some hoo-hah a few years back that had the knickers of the ‘minigun club of America’ in a tizzy, but I cannot remember what the issue was. A quick Google search this afternoon, however, did confirm that some miniguns are still being bought and sold. So if there is a ban, it’s not complete.

I think the argument about fighting government tyranny is more aimed at the police forces and civil administrations rather than flat-out taking on the standing military forces. Nowhere in their rhetoric does it say they plan to win against the federal military, just that they’ll give it all they’ve got.

I agree it’s daft, since here in the UK we do the same thing – fight against government tyranny – but we do it with votes, and strikes, and protests, and signatory sheets. Guns are almost completely banned nationwide (some farmers are allowed shotguns, and most big cities have 3-4 armed officers in a strike team), yet we do great at limiting or reversing stupid government decisions without them.

I think they’re a crutch more than anything; a way of making disenfranchised individuals feel important, or powerful – an ego stroke, basically.

Flag Post

Topic: Serious Discussion / Gun CONTROL issues

Originally posted by thepunisher52:

You know what is the most redicilous argument given by gun nuts? They want guns to fight govrn. tyrrany.
Bitch please, your guns can’t do jack shit against
[image of a tank]

It is however legal for ordinary US citizens to own their own tanks.

Video of a ride in a privately owned tank

and this
[image of a modern fighter jet]

Anti-aircraft missile launchers may be constitutionally protected arms.

hell, you have to be very lucky to cause some damage even against this
[image of a humvee]

This minigun would do a lot of damage to a humvee.

The weapons are legal, what puts them out of the range of the average citizen is purely cost.

Flag Post

Topic: Serious Discussion / Gun CONTROL issues

Originally posted by jhco50:

The title’s misleading. It is concerned solely with the case of Dylann Roof, and with President Obama’s legislation to date. It mostly attacks the legislation on the basis of that one single case study, arguing that the legislation would have done nothing to help in that one isolated case and is therefore completely useless.

Thank you for the link, but it is of limited use at best in holding up a position.

What does this have to do with the topic exactly? Concern over the quality of fast food and the amount of sugar in fizzy drinks doesn’t exactly strike me as in any way relevant to the topic of gun control.

Flag Post

Topic: Serious Discussion / More evidence that Autism Speaks is a hate group.

Originally posted by Jantonaitis:

Why would I? I’m not autistic [really!]

Experience with your brother, mebee?

Flag Post

Topic: Serious Discussion / More evidence that Autism Speaks is a hate group.

Originally posted by petesahooligan:

I don’t know what family services is so I can’t say that 4% is enough… maybe that’s all they need to put into that category. Maybe they’re meeting their expectations with 4%… I cannot say.

According to their own site it is mostly a grant-provision service, which gives grant money to services trying to educate / socialise / improve quality of life of autistic individuals and their families.

Additionally, they provide grants to families caring for an autistic person who have suffered a natural disaster, allowing them to get back on their feet more quickly, and out of the perpetual state of confusion following a disaster. Most likely the chaos following a disaster would be hell for many autistic minds to process. Janton can probably best answer that.

Flag Post

Topic: Serious Discussion / Gun CONTROL issues

One potential partial solution would be to serial number every legally made gun using quantum dots embedded in the metal when it is cast.

A few hundred quantum dots per gun would be more than sufficient, as only one dot needs to be read to identify the weapon.

Combine that with a state or national register of firearm ownership, matching serial number to owner (note: the db doesn’t have to say what type of weapon the serial number belongs to, just the owner, so the NRA and its supporters couldn’t claim it was identifying who owns which types of firearm; it solely identifies how many guns a person owns, and their serial numbers)

Then, when an incident occurs, or upon police suspicion something’s not right, the weapons a person owns can be checked for the presence of quantum dots – and the serial numbers on them checked against the database.

If the weapon’s not listed as belonging to that owner, we have evidence of under-the-table gun trading, and BOTH owners are now in trouble:

  • The one who currently owns the gun, for not having purchased through legal channels
  • The one who is registered as being the actual owner of the gun, for selling it illicitly (unless of course they reported it stolen, and the other owner doesn’t say they bought it under questioning).

It would catch home-made firearms as well. Home made = no quantum dots = demonstrably illegal.

You’d have to grandfather-in older guns, and they would remain a problem for quite a few years. But with all newer weapons being registered, and a policy of applying quantum dots to older weapons for an insurance discount… Eventually you’d get damn near all the legal guns – and then the illegal ones stick out by a mile.
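The lookup logic the scheme describes boils down to three outcomes per weapon: registered to the holder (fine), registered to someone else (illicit transfer, unless reported stolen), or not in the register at all (home-made / illegal). A minimal sketch, assuming a simple serial-to-owner mapping; all names and serial formats here are hypothetical:

```python
# Hypothetical registry: quantum-dot serial number -> registered owner.
# Per the scheme above, the register records only ownership, never weapon type.
REGISTRY = {
    "QD-1001": "alice",
    "QD-1002": "alice",
    "QD-2001": "bob",
}

def check_weapon(serial, holder, reported_stolen=frozenset()):
    """Classify a weapon found in `holder`'s possession."""
    owner = REGISTRY.get(serial)
    if owner is None:
        # No registered quantum-dot serial: home-made or otherwise illegal.
        return "unregistered"
    if owner == holder:
        return "ok"
    if serial in reported_stolen:
        # Registered owner reported it stolen; only the holder is liable.
        return "stolen"
    # Under-the-table trade: both holder and registered owner in trouble.
    return "illicit transfer"
```

For example, `check_weapon("QD-2001", "alice")` flags an illicit transfer, while `check_weapon("QD-9999", "alice")` flags an unregistered (home-made) weapon.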