Recent posts by matt on Kongregate


Topic: Serious Discussion / Logic

Originally posted by tenco1:
Originally posted by slogsdon:

has anyone in here actually formally studied logic?

i’m guessing not

Wait, you can study logic?

You can study logic in 3 or so ways:

1) from the philosophy side, with questions like “what are correct ways of inferring information”, and “how do proof, truth, and reality all relate to each other”;

2) from the math side, with questions like “what system should we use to talk about and interpret this particular mathematical structure”, “what are the properties of this system”, and “what can we learn about a structure once we know it can be interpreted in a certain system”;

3) from the algorithm side, with questions like “how can we efficiently implement the rules for a system on a computer” and “how difficult is it to solve problems like ____ for structures interpretable in systems like _____”

I’m sure there’s more, but this is probably enough to vaguely cover the work of most people who would describe themselves as logicians. All of these people would have learned formal logic, and at least those in 2 and 3 routinely work with it.

I’m a card carrying member of category 2 (and occasionally sit around listening to category 3 people because those are also often math people). So I’ll ramble from that perspective.

Doing it from the math side means we’re already starting with assumed logic and counting ability on everyone’s part, and then you go and define formal systems called logics. The assumed ambient logic and number ideas aren’t a case of “you need to go learn that in order to learn this”; they’re something we have to assume as a general reality in order for doing math to make sense at all, sort of like how you have to assume English conveys your intended meaning in order to think that having a conversation has a point.

The general theme with what makes something “a logic” is that it gives you a formal language and it tells you what kinds of deduction are okay.

Crack open a math logic book and you’ll see a definition of “proof” somewhere in there. Usually it’s something like “proof in the formal system F” is defined as a finite list of formulas from F such that each formula is either an axiom or is derived from the earlier formulas using the deduction rules. So a proof is usually a thing that looks like

1) By assumption, blah blah blah.
2) By assumption, blah blah.
3) By 1 and 2, and some inference rule, blah blah blah blah blah.
4) By 3 and some inference rule, blah blah blah.
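That definition is mechanical enough that checking a proof can be done by a computer, which is part of what side 3 cares about. Here’s a toy sketch; the tuple encoding of formulas and the choice of modus ponens as the only inference rule are my own illustrative picks, not from any particular book:

```python
# Toy proof checker. Formulas are encoded (hypothetically) as nested tuples:
# ("->", p, q) means "p implies q"; plain strings are atomic formulas.
# A "proof" is a finite list of formulas, each either an axiom/assumption
# or obtained from two earlier lines by modus ponens.

def modus_ponens(a, b, conclusion):
    """True if conclusion follows from {a, b} by MP,
    i.e. one of them is (other -> conclusion)."""
    return a == ("->", b, conclusion) or b == ("->", a, conclusion)

def is_proof(lines, axioms):
    """Check that every line is an axiom or follows from earlier lines."""
    for i, formula in enumerate(lines):
        ok = formula in axioms or any(
            modus_ponens(lines[j], lines[k], formula)
            for j in range(i) for k in range(i))
        if not ok:
            return False
    return True

# Example: from the assumptions p and p -> q, derive q.
p, q = "p", "q"
axioms = [p, ("->", p, q)]
print(is_proof([p, ("->", p, q), q], axioms))  # True
print(is_proof([q], axioms))                   # False: q is neither assumed nor derived
```

Real systems have actual axiom schemas and more rules, but the shape is the same: a proof is just a list you can verify line by line.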

You can go find a definition of “true” in a math logic book too, but it would probably seem weird unless you’ve learned enough stuff beforehand.

Being “true” and being “provable” aren’t always related, though you’d want them to be for your logic to be useful. Showing this for the system that’s commonly used as sort of the “base logic” for most math reasoning isn’t totally obvious. For things like the theory of arithmetic, there’s weird stuff: we know some sentences are “true” but are not “provable”.

Anyway, a potentially interesting point to take away here is that this is all somewhat arbitrary because the definitions are up to people to make. So being “proved” and being “true” depend on the rules you decided to use. But we don’t just pick them totally randomly. We generally try to pick rules that match with that “obvious reasoning”, but there’s no absolute justification for that, and that’s where philosophy comes in.

When people talk about “logic” in everyday use, they probably mean that general system everyone is assumed to understand, knowing what “and” “or” “implies” “for all” “at least one” “not” etc. mean. But, when they say that they used logic or proved something or that something is true, it’s often more accurate to say this relative to their system, with whatever beliefs and assumptions they were using. If it’s transparent enough what they did and it overlaps with your system enough, you’d agree with them. Otherwise, you might not. This is sometimes why you get rational people disagreeing on moral issues for example. Though eventually they should figure out that they’re disagreeing about assumptions or acceptable methods of inference. Again, the question of whether one system is better than another is a topic for philosophy.


Topic: Serious Discussion / Eye Color Puzzle

Throwin' in my explanation. If you get it already, or don't feel like reading much, skip to the "summary of that B=3 case" part, second to last section (way down past the pictures).

Solution below

It definitely helps to consider smaller cases first, with fewer people on the island. Just to get some notation going, I'll say B = 3 as an abbreviation for "the number of blue-eyed people is 3", and G = 4 for "the number of green-eyed people is 4", etc. We can use G for the number of non-blue-eyed people if you don't want to assume everyone is either blue or green eyed; it doesn't actually matter. I'm also going to use ≤ to talk about these numbers.

People have already gone over this, but let's quickly recap the B = 1 and B = 2 cases:

B = 1: if there were only 1 blue-eyed person.

Call him Bob. Before the announcement, Bob only sees green-eyed people (if any exist), and doesn't know his own eye color. So, as far as Bob knows, 0≤B≤1, since there's either 0 or 1 blue-eyed people, (either no one, or him). The announcement that there's at least one blue-eyed person tells him that 1≤B. So, now Bob knows that 1≤B≤1, that is, that B=1. Since he sees no blue-eyed people, he deduces that he is the 1, and he leaves.

B = 2: if there were 2 blue-eyed people.

Call them Bob and Barry. Before the announcement, Bob sees 1 blue-eyed person (Barry) and some number of green-eyed people. So, as far as Bob knows, 1≤B≤2, since there's at least Barry, and maybe himself. Barry also only knows 1≤B≤2 for similar reasons. Neither of them would leave before the announcement, since each has no reason to think that it isn't B=1, that he has green eyes, and the other dude just hasn't figured it out.

Here's one of the key ideas in this puzzle though. What could Bob say about what Barry knows? Sure, we know that Barry knows 1≤B≤2, but that's because we know that Barry sees that Bob has blue eyes. Bob doesn't know that he himself has blue eyes. So it's possible, as far as Bob can tell, that Barry sees only green-eyed people. That means the best Bob could say about what Barry knows is 0≤B≤2. The 0 possibility is because Bob thinks it's possible that only Barry has blue eyes and hence that Barry sees 0 blue-eyed people; the 2 possibility is because Bob thinks it's possible that both he and Barry have blue eyes and hence that Barry sees 1 blue-eyed person and acknowledges that he could be a second.

So, to recap that last paragraph: Bob only knows that Barry knows 0≤B≤2. Similarly, Barry only knows that Bob knows 0≤B≤2. We know better, because we can see Bob's eye color, but Bob can't, so his information about Barry is worse.

When the announcement is made, now Bob knows that Barry knows 1≤B≤2 (and vice versa). This is the change that the announcement makes. Even though we know everyone knows 1≤B from the start, Bob didn't know that Barry knew that 1≤B; now he does.

This doesn't lead to anyone leaving on the first night, because it's still possible, as far as Bob can tell, that B=1 and that Barry is the only one with blue eyes. Barry thinks similarly. However, after the first night, when no one leaves, Bob knows that Barry must have seen another person with blue eyes. If Barry hadn't seen someone else with blue eyes, he'd have deduced that B=1 and that he was the one blue-eyed person. So, his remaining on the island informs Bob that 2≤B, and hence that B=2. But, then since Bob only counts one other blue-eyed person, he knows he is the second and deduces his eye color and leaves. Similarly for Barry.

B=3: he knows he knows.

From the length of the previous section, you know this sucks. We'll name the three guys Barry, Ben, and Bob. We'll end up talking about what Barry knows about what Ben knows about what Bob knows (alphabetical yeah!).

So let's jump into it and hey sweet here's a picture.

The dots are people with their eye color indicated (this is the 3 blue eye, 3 green eye case specifically), and the lines toward the circled dot (Barry) indicate what he sees. Focus on the lines.

So, clearly, Barry knows 2≤B≤3, because he sees 2 blue-eyed people, and doesn't know his own, so maybe he's +1 for a third.

Now, what does Barry know about what Ben knows? Oh sweet more pictures

The lines in this picture represent what Barry (dark circled) knows about what Ben (lighter circled) knows. The new thing here is the grey squiggly line from Barry to Ben. This represents how Barry's lack of knowledge about his own eye color affects his knowledge of what Ben knows. Since Barry doesn't know his own eye color, this makes his knowledge of the numbers for Ben weaker by 1. Reread that last sentence.

Barry doesn't know if Ben sees 1 or 2 people with blue eyes, so Barry doesn't know whether Ben thinks the bounds are 1≤B≤2 or 2≤B≤3. So, the best Barry can say about Ben's knowledge is 1≤B≤3. Notice that the lower number got pushed down one compared to what Barry himself knows; again this is because Barry doesn't know his own eye color, and this lack of information has an effect when we do this perspective thing.

Now finally, what does Barry know about what Ben knows about what Bob knows? Of course there's a picture.

Barry is the darkest circle, Ben the middle grey, and Bob the lightest. The lines represent what Barry knows about what Ben knows about what Bob sees. The point is that since Ben doesn't know his own eye color, this gets reflected in a knock-down of the lower bound on B by one more. You can go think it through, but remember that we're thinking about this from (the first guy) Barry's perspective.

So, all Barry knows about what Ben knows about what Bob knows is 0≤B≤3. The 0 case is the (impossible from our perspective) possibility (from Barry's perspective) that Barry has green eyes, Ben thinks it's possible that Ben has green eyes, and Ben thinks that Bob thinks it's possible that Bob has green eyes.

Great, now, none of them leave yet because as far as each blue-eyed guy is concerned, there's 2 blue-eyed guys who just haven't figured it out yet.

Now, the announcement is made, and the only change is to this nested thought: now, Barry knows that Ben knows that Bob knows 1≤B≤3; the low number got bumped up 1.

Still, no one leaves on the first night. But, this has an effect on the second day. When Bob doesn't leave, it indicates to everyone that Bob sees someone with blue eyes (otherwise, he'd have deduced the 1 was him and have left). It's no longer possible in anyone's any-kind-of-nested perspective that Bob sees 0 blue-eyed people. So, in particular, this tells Barry that Ben knows that Bob sees 1 blue-eyed person.

This lets Barry know that Ben knows that 2≤B (because Ben sees Bob and knows Bob sees 1 other). If you check back up to an earlier paragraph, you'll see that this is an update from 1≤B≤3 to 2≤B≤3. Notice it bumped up the lower bound by 1; this time it updated Barry's knowledge about Ben. Also notice, now Barry thinks that Ben knows exactly what Barry knew all along: 2≤B≤3.

Still still, no one leaves on the second night. But this has an effect. When Ben doesn't leave, it indicates to everyone that Ben sees 2 people with blue eyes (otherwise, if he only saw 1, but knew there were 2, he'd have deduced that he was the 2nd and left; also notice that this deduction wasn't possible for Barry until he knew that Ben knew 2≤B). In particular, Barry now knows that Ben sees 2 other blue-eyed people, and so Barry deduces that 3≤B≤3, hence B=3. Only seeing 2 blue-eyed people, he deduces that he is the third and leaves. Similarly for the other blue-eyed guys.

Summary of that B=3 case.

The point was that if you're a blue-eyed guy, you know 2≤B≤3 because you see 2 blue-eyed guys and you might be a third, but each time you go down a level (I N C E P T I O N) in the "he knows that he knows that he knows..." chain, the lack of knowledge about one's own eye color manifests as a drop in the lower bound. One step makes that information 1≤B≤3. Another step makes it 0≤B≤3. As far as any of you know, you have green eyes, so as far as you know about what he knows, he might see you have green eyes and think he has green eyes, etc.

It's that lowest level 0≤B that gets updated when the announcement is made. Now, each day, when someone doesn't leave after this, it informs everyone that he can find the lower-bound-number by counting other people. That gives the guy up a step some more information (from your perspective), because not leaving when you know 2≤B is like announcing "yeah, 2 other people". It keeps propagating up a level when people don't leave, adding 1 to those lower bounds, until it finally reaches you up at the top, making your knowledge 3≤B≤3.


If you understand the B=3 case you probably believe it works up to 100, and realize that the green-eyed people don't really affect anything. The result can be proved inductively, but the hard part is to understand that he-said-she-said aspect and how the knowledge levels get affected when you move one lower, or when you advance one day. The inductive proof is essentially to check some base cases like we did for B=1,2,3, and then argue that if B=n, and if you were a blue-eyed person, you'd see the other n-1 blue-eyed people and know they'd figure it out and leave in (n - 1) days if you had green eyes. They don't leave, so there's n of them and you've got to be one, so you leave on the n-th day. You don't actually need to check the B=3 case for the proof because you already have earlier cases, but it's funlol.
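For funzies, the bound-propagation story can even be mocked up in a few lines of code. This is just an illustrative sketch, and the "common-knowledge lower bound" encoding is my own way of compressing the nested-knowledge argument, not a full model of everyone's beliefs:

```python
# Hypothetical simulation of the island. We track a single common-knowledge
# lower bound L on B (the "everyone knows that everyone knows... L <= B" level).
# The announcement sets L = 1. Each night, anyone who sees fewer than L
# blue-eyed people must himself be blue-eyed (to make the count reach L),
# so he leaves; if no one leaves, everyone learns that L + 1 <= B.

def days_until_departure(num_blue, num_green):
    """Return (day, how_many_leave) for the first night anyone leaves,
    counting the announcement day as day 1."""
    people = ["blue"] * num_blue + ["green"] * num_green
    lower_bound = 1  # established by the announcement
    day = 1
    while True:
        # A blue-eyed person sees num_blue - 1 blue; a green-eyed sees num_blue.
        leavers = [p for p in people
                   if (num_blue - 1 if p == "blue" else num_blue) < lower_bound]
        if leavers:
            return day, len(leavers)
        lower_bound += 1  # nobody left, so everyone learns the bound is higher
        day += 1

print(days_until_departure(3, 3))      # (3, 3): the B=3 case, all 3 leave night 3
print(days_until_departure(100, 100))  # (100, 100): the full puzzle
```

Notice the green count never enters the leaving condition, which matches the observation that the green-eyed people don't affect anything: the B blue-eyed folks all leave on night B.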

Topic: Serious Discussion / Eye Color Puzzle

I’ve PMed you to alleviate your concerns :S (It takes a while to type!)

I’ll do my best to explain it here later tonight, but you can all do whatever you want in the meanwhile. I guess just try to mark off somehow if you’re giving any superbig hints, in case anyone who doesn’t want to see it has read this far, though I doubt that.


Topic: Serious Discussion / Eye Color Puzzle

Originally posted by Redem:

That there is at least one person with blue eyes would seem to be already evident from simple observations of the other people.

Originally posted by Koshej613:

So, MATT, something’s wrong here, don’t you think?

It’s supposed to be like that. The point of the puzzle is pretty much to highlight this weirdness.

Every man on the island already knows before the announcement that the number of blue-eyed people is at least 99, since he can see 99 other blue-eyed people. Every green-eyed person can see 100 blue-eyed people. But (tiny not-really spoilers) yes, the announcement changes something in a non-obvious way (/spoilers).

Originally posted by Ketsy:

Consider if there was just four people on the island. Two with blue, two with green.

This helps.


Topic: Serious Discussion / Eye Color Puzzle

they look at their reflection in the water

This is not the intended answer and counts as funny business. The setting of the island is a somewhat unfortunate standard for this reason.


Topic: Serious Discussion / Eye Color Puzzle

This puzzle has shown up a bunch of places (including in a thread in the off-topic forum here circa 2007, probably more) but I was thinking about it again lately and figured I'd remind people / talk about it here, for funzies.
The solution is here if you want to read it

The Set-up
200 men are placed magically on an island, and our calendar is reset to day 1. We know 100 of them have blue eyes and 100 have green eyes (no mix-and-match going on here, everyone has a pair of same-colored eyes). We don't interact in any way with them, we're just external magical observers. The men can all see each other and accurately know eye colors, but they cannot see their own eye colors. So, for example, any blue-eyed man will see 100 green-eyed men and all 99 other blue-eyed men, but does not know his own eye color (and he doesn't know there's 100 of each in advance, so he can't tell just from this information). The men do not communicate in any way with each other (so they can't learn their own eye color from someone else telling them, either). The men are perfect deduction machines, in the sense that if anything is knowable from the information they have access to, they know it right away.

The Rule
If any man on the island discovers his own eye color, he must leave at 11:59:59 PM on the day he discovers. So, if Bob knows (not guesses, knows) his eye color on Tuesday, he leaves Tuesday night. He doesn't tell anyone his eye color, they never communicate, he just goes. Everyone else still on the island knows that he left. The men are all aware of this rule.

The Puzzle
Assure yourself that, even if years were to pass, none of the men would learn their own eye color in this set-up. Now, on some day, say day # D, an announcement is magically made, informing all of the men on the island that there is at least one man with blue eyes on the island. All of the men hear and understand and know this once the announcement is made. The question is, does the announcement lead to anybody leaving the island, and if so, who and when? There's no funny business; things are as described above. This isn't a riddle where the answer involves some tricky interpretation of the statement.

I'll let you guys talk around a bit before saying anything else. As a general tip for annoying problems involving numbers: feel free to mess around with the numbers and think through other similar examples to see if it tells you anything. The answer is easily available on the internet, but you're cooler if you don't look until you've at least thought about it for a while.

Edit: Solution
Is here if you'd like to read it.

Topic: Serious Discussion / Do we have free will?

Alright so, your argument seems to be to take ~these as assumptions:

1) What a person does is controlled by his emotions at that time.
2) Some kind of assumption about transitivity of control. (So that we can say in light of 1: if you want to control your actions, you need to control your emotions.)
3) A person’s emotions at a particular time are a result of their history.

Then do some kind of argument that 3 lets us push back any possible point of control over emotion to before they could be said to have control, say birth or earlier. So, a person never has control of their emotions. Then, by 2&1, they have no control over their actions.

So, yeah, this looks like Strawson’s argument (as it appears in the wiki link), with “the way you are” swapped out for “your emotions” and his 3 given a bit more justification.

In the interest of generating more discussion, I guess an obvious question to ask is:

How different is assuming that sort of totality of control in (1) from assuming that you don’t have free will in the first place? It’s essentially already saying what you do is determined by ____ rather than a choice made freely. Believing in free will apparently entails belief that your decisions can be made (at least to some degree) free of your emotions. They might say “No, I do what I do at that time (at least in part) because of the way I choose to be, not just the way I am.” How do you show them that they’re wrong? Is there a way to verify (1) experimentally by inducing emotional states and getting certain reactions from people? Is there another way to check the soundness of this argument? It seems like it wouldn’t be effective against someone who believed in free will (not that soundness is the same as effectiveness anyway…)


Topic: Serious Discussion / Do we have free will?

I am specifically asking about your emotional approach. You seem to have suggested that people can’t control their emotions, and therefore they do not have free will. I’m saying I doubt that deduction. Granted that people can’t control their emotions, how does it follow that they have no free will? Are you also suggesting that will is determined by emotion, or these are the same thing, or something like that?


Topic: Serious Discussion / Do we have free will?

Originally posted by DarkBaron:

If you have free will, then psychology is a useless field.

As you’ve observed, people generally are capable of learning and applying math and logic, but a lot of people don’t. Couldn’t it be a similar situation with the ability to exert free will? Maybe people are generally capable of exerting their will, but fail to do so. Maybe psychology would still have a role then in addressing why people do this, or in training them to use it, or something like that. My next question will be related to this ‘useless psychology’ comment as well.

If you have free will, I challenge you to control which emotions you feel at which time, seeing as emotions can not only control your thoughts, but also your actions, which thereby affect your life and the progression of such life.

You’re employing a particular interpretation of free will, reading it to mean something like “complete control over all bodily functions”.

With a more relaxed notion, is it not possible that you could, for example, be able to control your breathing, your blinking, etc. but not the spread of some cancer in your leg, and still call this freedom? Couldn’t you have a kind of freedom which gives you control over certain aspects of your body, but not over others, maybe because those systems are somehow independent systems which just happen to be inside your body? People hardly say they have control over a parasite that enters their body, even though in some sense once the parasite enters their body it is a part of it. We have bacteria in our intestinal tract that I doubt anyone would claim to be directly in control of, though they are part of our body’s function.

Say we stuck you and a friend inside a giant robot suit and gave you control over the legs and arms, but gave him control over the power flow throughout the suit. You still make some choices, right? Even though your friend could even cut your connection to the things you “control”.

In short: I don’t think inability to control emotions would be sufficient to rule out free will. It’s natural to think that if it exists, it stops somewhere, but why suppose the boundary is at your skin rather than inside it, or even inside the brain?

Side note: I’ve always thought biofeedback is cool. Also, people take drugs to alter emotional states. Blah blah.


Topic: Serious Discussion / How do we prove what is real and what isn't?

Originally posted by DarkBaron:

No, I’ve done it in thousands of tones both on the Internet and IRL. Bottom line is: people don’t like being told they don’t know something – even when they don’t.

I acknowledge that this happens. Feel free to read my last post after mentally changing “people aren’t insulted because…” to “people aren’t insulted only because…” if you want.

(Extra tip: you don’t have to tell people they’re ignorant. You can ignore them or have a conversation without saying that.)


Topic: Serious Discussion / How do we prove what is real and what isn't?

People aren’t insulted because you call them ignorant, they’re insulted because you do so in a way they perceive as insulting or arrogant. If their response sickens you, you should probably change the way you talk. That’s the rational solution, even if you disagree with their interpretation of your posts. (Unless you think you can change everyone.)


Topic: Serious Discussion / noobs

Oh snap, if the bookshelf has slots which are order-isomorphic to ω+2 = {0,1,2,…, ω, ω+1}, then the set B of black books has order-type ω+1 = {0,1,2,…, ω}, and can’t fill the shelf without re-ordering the books, despite there being a bijection with the slots.


Topic: Serious Discussion / Should Goverments fund Private Healthcare Cleanups?

Then you’ve missed what I’m doing here. I don’t care what you do with your money, and like I’ve said in the thread a couple times already, this has nothing to do with my opinion on the topic. Again, that’s not secret code for “I support funding it”.

You’ve made a bunch of statements and you haven’t backed them up. I want you to back them up. I don’t want you to back them up because I support some position and am trying to discredit you. I don’t want you to back them up because I want you to change your mind. I don’t care what your position is. I want you to back them up because you’re making statements that you’re not backing up. Is this difficult to imagine?

The closest thing to support for your ideas that you’ve given was your question

If someone pressures me to kill a guy, does that relieve me of responsibility when I get arrested?

which was in response to the general question I asked: if society pressures someone to do something, is society responsible? I assume your asking that as a retort means you answer “no” to both, but in your special case the answer seems like a resounding “yes” as far as our society and law are concerned (at least some of the responsibility is diminished). So it doesn’t make sense as any kind of counterargument, unless you want to explain why society and law are wrong about it. When I pointed this out, it looks like you just switched to talking to someone else, dodging the issue.

Apart from that it looks like you’re suggesting it isn’t a social problem in the first place. You haven’t responded to questions about that.


Topic: Serious Discussion / Should Goverments fund Private Healthcare Cleanups?

Originally posted by Spaghedeity:

Driving is an awful example because it’s impossible to travel without using roads that the government is responsible for maintaining.

And yet there’s similarities. I’m just getting the impression that you’re sidestepping anything that might lead to you actually trying to analyze your own thoughts.


Topic: Serious Discussion / Should Goverments fund Private Healthcare Cleanups?

Originally posted by Spaghedeity:

How are any of those comparable to getting bigger tits?

One possibility:

Someone might get in a car and drive [choose to do something] on icy roads [with certain dangers] because if they don’t show up to work they’ll be fired [because of expectations of others].


Someone might get in a car and drive [choose to do something] on icy roads [with certain dangers], not expecting to have the bad things happen, and yet when bad things happen, society still often pays, via taxes or increased insurance prices in the future or something something.

If you don’t like the examples, why not explain why they’re not apt or something?


Topic: Serious Discussion / Should Goverments fund Private Healthcare Cleanups?


I’m not sure what you’re trying to say with that, but again, all I’m doing is poking at other people’s posts. When I say “Whether or not I hold a particular view is irrelevant to the arguments and questions above”, it isn’t code for “LOL U GOT ME, BUT PRETEND THATS NOT MY VIEW PLEASE”. It’s straightforwardese for “quit worrying about it” with some subtle “I say lots of things I don’t necessarily believe to see what other people will say in response because I’m interested or I don’t like their arguments, even if I agree with their position.”


Originally posted by Spaghedeity:

The person that got the implants. Nobody was forced to get them, and nobody forced them to get that particular kind of implant.

Just so this is clear, the questions you responded to in both of those posts are about whose responsibility it is if it is a social problem.

If you reject that society is part of this, then, alright. You didn’t talk to them (we’ll assume this, but it doesn’t say much), and you’re just stating again that nobody forced them either. You’re not saying much different from Moldy’s post, which is what prompted my questions in the first place. Though, we did change from “asked” to “social pressure” or “societal problem” to “somebody forced” with your last post. My questions have been about “motivation” and “pressure”, not “forcing”, and yes I think this is an important difference, since I don’t think anyone here will think that people are widely being forced at gunpoint to get breast implants, but I’m suggesting the idea be entertained that social structure could be such that there is a general motivation to get breast implants.

Anyway, if this is your case, feel free to say what your response would be to someone who tries to argue that social structure / society in general pressures women to get implants. If your response is the statement ‘nobody directly violently required them to do it’ or something like that, then I’m interested in your response to things like the suggested connection between cultural values and eating disorders, and whether you think that’s similar.

If someone pressures me to kill a guy, does that relieve me of responsibility when I get arrested?

Going forward, I’d prefer to keep the focus on society-as-a-whole pressuring, rather than individuals, when I say “society” and “social pressure”, since we’re discussing whether government / society as a whole is going to be responsible in the end.

I’m also going to read “relieve me of responsibility” as “relieve me of at least some of the responsibility”.

Anyway, depending on to what degree A pressures B, and to what degree B’s decision was dependent on that pressure, the answer is obviously yes. I can easily imagine situations where I’d consider B (at least) less responsible than otherwise for that action. This is the sort of idea that goes behind defenses like duress / entrapment and is pretty established in our culture.


Topic: Serious Discussion / Should Goverments fund Private Healthcare Cleanups?

Originally posted by Indy111:

So for the fault of some people, everyone has to pay?

You said “some people … everyone”, I said “society … society”, so I’m not sure we’re talking about the same thing. To help clarify, I mean society as a whole, not specifically her boyfriend or something. I’m talking along the lines of how social pressure / western culture has been suggested as contributing to eating disorders. If it’s easier to think about this, you can pretend I’m asking whether you think that, if scholars/doctors believe our culture leads to increase in the prevalence of eating disorders, the government / taxpayers should be responsible for programs that attempt to treat people with eating disorders.


Topic: Serious Discussion / Should Goverments fund Private Healthcare Cleanups?

Originally posted by jhco50:

And did you miss my post where I said, “I believe the manufacturers are already paying for the surgery. It is being done under a recall.”?

I saw it. I also think that the financial responsibility in this particular situation lies with whoever made the defective product and anyone (if they exist) that misrepresented the dangers. So I’m not really concerned with that aspect of this thread any more (and I don’t see what else there is to discuss in that direction). IMO there’s interesting, related things left to discuss that got brought up, and these weren’t all somehow nullified by your post. So far I’ve been concerned with

  • how dependent the decision by an individual to get breast implants is on social factors
  • how responsible society is for accidents/damages that result from socially motivated behavior
  • how the responsibilities of an individual in a society relate to the responsibilities of the society

which, if you want, have nothing to do with this particular situation, but are relevant to the more general question of whether government should fund the repair of health problems that result from elective surgery.

Topic: Serious Discussion / Should Goverments fund Private Healthcare Cleanups?

Originally posted by jhco50:

Matt, this is America. We are a free people and don’t want your brand of socialism.

Whether or not I hold a particular view is irrelevant to the arguments and questions above. I’m saying what I’m saying because I find certain aspects of the discussion interesting and want to see what people think and how they’re justifying their positions.

Moldy’s comment looks like it’s assuming that (1a) the people made an independent decision and (2) others aren’t responsible for such decisions. I’m questioning how independent that decision really is, and, if it were sufficiently not-independent, whether there’d be some social obligation to help.

Spag’s comment is similar but a bit different because it seems more individualized. He himself didn’t talk to the women, i.e. (1b) their decision was independent of him, and so he doubts that he should pay, probably because he thinks he isn’t responsible for decisions that were made independently of him, which is essentially (2) above. I answered his question (maybe it was rhetorical :S) with the kind of argument I’d expect someone to make about why an individual might be responsible, as part of a larger society, even if they did not themselves directly contribute to something that (some part of) society is responsible for.

Topic: Serious Discussion / Should Goverments fund Private Healthcare Cleanups?

Originally posted by Spaghedeity:
If it is society’s fault in some way like that, is it society’s responsibility even if they made an apparently individual choice to do it?

I never even talked to these women. Why should I pay for it?

Some kind of argument about how you’re a member of the society. If you share its benefits (not the breasts specifically; benefits in general), you share its burdens. You haven’t driven on ______ Street; you still pay for its repair. People pay for Indian reservations / associated subsidies despite no direct link to the justification for that support.

If it were a social problem, why shouldn’t you pay? Who in particular should be paying instead of everyone?

Topic: Serious Discussion / Should Goverments fund Private Healthcare Cleanups?

Originally posted by Moldykid:

We didn’t ask those women to put these in their bodies and it is not our responsibility to pay to correct their horrid decisions.

What if they were motivated to do it because of social pressure? They could argue that they were indirectly “asked” to do it.
If it is society’s fault in some way like that, is it society’s responsibility even if they made an apparently individual choice to do it?

Topic: Serious Discussion / Is Math "Stronger" than God?

Suppose we have the existence of a priori truth. And I have given alleged a priori truth. This would preclude any sort of omnipotent being, no? So does that mean that in this universe, God does not exist? Even if mathematics or logic is even largely empirical, if there is at least one instance of a priori truth, this would preclude God, no?

I’m agreeing with this, except for the “so does this mean that in this universe…” part, because that step requires that this universe has a priori truths. I’m agreeing with the implication, but trying to get you to justify the hypothesis, since you want to apply it.

Hmm…well, if we have a contradiction in our system, then even with constructivist logic, we can prove that anything and everything holds (see: the principle of explosion). So then, that would imply the statement “God doesn’t exist”. Which is fine if you’re willing to accept a God that both exists and doesn’t exist at the same time.

It’s also fine if you’re willing to disregard those logics as having anything to do with reality. Why shouldn’t someone deny logic, or be content with a God which exists and doesn’t exist simultaneously? Is it wrong somehow?

Topic: Serious Discussion / Is Math "Stronger" than God?

Even if there is a 100% omnipotent God, he still can’t defy logic.

This only works if you’ve already given a preference to logic, which is what you’re doing when you think that

defying logic is logically impossible

means anything about how reality actually works.

In short: why should logical impossibility stop anything?

Topic: Serious Discussion / Is Math "Stronger" than God?

Oh, I see, you’re not saying that the axioms are the truths, but that a choice of axioms and definitions necessarily leads to certain theorems and that (even if no one was around to make the choice) that would be how that system worked, independently of anything else? (I think ultimately this all comes down to the same issue anyway.)

Another example of this, which I don’t know why I’m including but you might be interested in, is formal grammars. These are just things that go something like
1) these letters will be acceptable: a,b
2) these rules will be acceptable: S → abS and S → a
3) we say a string is valid if you get it by starting with “S”, applying the rules in 2 finitely many times, and the result doesn’t have an “S” in it.

For example, S → abS → ababS → ababa shows us how “ababa” is a valid string.

Clearly, if you chase out these rules, you’ll never get a valid string ending with “b”. Maybe this system doesn’t actually describe anything, but this seems like something that doesn’t depend on us, or anything. Even if no one is around to describe this system, the fact that if such a system were described, then no valid strings would end with “b”, seems obvious. Whatever exists or doesn’t exist in the universe, if this system somehow applied, the results are somehow inherent in the description.
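If you want to see the claim checked mechanically, here’s a small sketch (my own illustration, not part of the original discussion; the function name and the depth bound are arbitrary choices) that enumerates every valid string of the grammar above, up to a bounded number of rule applications, and confirms that none of them end in “b”:

```python
from collections import deque

# Grammar: letters {a, b}; rules S -> abS and S -> a.
# A string is valid if it is derivable from "S" and contains no "S".

def valid_strings(max_steps):
    """Enumerate every valid string reachable in at most max_steps rule applications."""
    results = set()
    queue = deque([("S", 0)])
    while queue:
        current, steps = queue.popleft()
        if steps >= max_steps:
            continue
        for replacement in ("abS", "a"):  # try each rule on the single "S"
            produced = current.replace("S", replacement, 1)
            if "S" in produced:
                queue.append((produced, steps + 1))  # still has a nonterminal
            else:
                results.add(produced)                # a finished, valid string
    return results

print(sorted(valid_strings(4)))                          # ['a', 'aba', 'ababa', 'abababa']
print(all(not s.endswith("b") for s in valid_strings(4)))  # True
```

Of course, a bounded search like this only ever inspects finitely many strings; the claim about *all* valid strings is exactly the thing that needs the induction discussed below.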

The issue with that is how to make the jump from us believing that to it being a fact of reality. How do you know that it is impossible to have this system and valid strings with “b” at the end? That makes no sense, clearly it can’t be. But what does our sense of obviousness have to do with reality? We can prove it using induction on the number of rules applied in 2, but that just pushes the problem onto other methods of reasoning and axioms we consider obvious, and then why those? Clearly!

If we don’t all agree at some point that we have an intuitive understanding of the reality of the universe, then we’re stuck in this infinite cycle of “well, ok, but how do we know that?”

If we believe we have an intuitive understanding of (at least some of) the reality of the universe, how does that work? How do we know we have it and that’s what it is? The “but how” questions keep going here, too.

If we want to just accept that we have that intuitive understanding, for the purpose of progress and not having this infinite “but why” loop, then why should we consider assumptions about certain rules being meaningful better than the assumption that God exists?

Think about why we consider anything obvious for a minute. If you don’t want to accept that we have an innate knowledge of the reality of the universe, then I think the best answer we’ve got is that we think things are obvious when our brains are wired in such a way that they seem obvious. That’s just some signal in our brains: little zaps in lumpy masses resulting from evolution and experience that have adapted over time to our environment. So, maybe we’ve got some model of the world up in our heads that helps us function here, but how do we make the connection between that model and reality? We have notions like consistency and necessity; what stops those from just being approximations or shortcuts to help us function? What makes one person’s zaps in their lumpy mass better than someone else’s?

Thinking about it like this maybe even suggests that our ideas about logic and consistency and necessity are all heavily dependent on the universe and our environment, since we might only have these thoughts as a result of pressure by the environment.

Topic: Serious Discussion / Is Math "Stronger" than God?

Devil’s advocate:

Yes, existence of anything independent of God is inconsistent with the notion of a God that is an omnipotent controller.

Now, so that the problem isn’t just assumed and defined away: why do you think the axioms of math/logic are independent of the universe/God? (You’ve repeatedly asserted that they are, and given an example of something you assert is independent, but have not justified these assertions.)