on having a world

I recently went through a period of vegetarianism, because I didn’t like the idea that, if I were really aware of where my meat came from, I wouldn’t want to eat it. That seems to me a pretty reasonable maxim of behaviour – if you can only do something by being unreflective about it, you should probably stop. But despite continuing to believe that, I’ve started eating some meat again.

Now I don’t want to suggest that I resumed my omnivorous ways for some principled reason – in all likelihood I gave in to convenience and the deliciousness of eating flesh. But I have been thinking, ever since, about what point in the food chain I start caring about.

A good framework for discussion can be had by considering the question transcendentally – that is, from the perspective of the necessary conditions for the possibility of asking the question at all. I need to be alive, and not starving, even to have the behavioural flexibility to decide what I ought to eat. This basic fact provides a transcendental basis for rejecting any account of which organisms we should care about that implies my very existence is wrong. So if someone tried to argue that we should care about every microorganism, every plant and every animal equally, we could reject their position simply on the grounds that it would make life, and more particularly life which is able to make dietary choices, impossible. You can’t take a step without killing some tiny mites, or run your finger across a table without precipitating a microbiological holocaust. Since there appears to be no way to produce an organism complex enough to worry about what it eats without it killing zillions of simpler organisms all the time, we should conclude that this isn’t something to worry about.

So we know that there must be some limit, some line beyond which we are not obliged to care (or maybe not obliged to act as though we care) about some organisms. Life would be impossible otherwise. Now we’re just trying to work out where that line is. As usual SMBC provides some guidance:

Ok, so that didn’t actually help any. But I find it amusing.

Anyway, I’ve come to think that the relevant line is not (as the comic suggested) the absolute number of neurons involved, but rather whether or not the organism in question has a world. It’s the experience of sentient beings that I’m worried about, because only sentient beings have experience. I don’t care if you kill a pig in World of Warcraft, because it’s obviously not a conscious creature. I do care if you kill a person, because they have thoughts and feelings, a perspective, and so on.

Now that doesn’t really clear all that much up, because as we all know the debate about what consciousness is has been going on forever and does not appear to be coming to an end any time soon. But I do think there is a fairly plausible contender for a naturalistic account of consciousness: Thomas Metzinger’s notion of a ‘transparent self-model’. Since I was introduced to it through his book Being No One a few years ago, it has become my go-to way of thinking about consciousness. He describes it in the video below.

If you don’t feel like watching the video, here’s my potted summary: conscious experience is the result of an organism possessing a transparent self-model. A self-model is a kind of global workspace where your brain brings together a summary version of all the different streams of information it’s processing at any given time. It’s like a simulation that your brain runs of your world. The ‘transparent’ part means that your brain is set up to not notice that what it is dealing with is a simulation. Each of the disparate parts that are linked into the self-model treats it as though it isn’t dealing with a representation, but with the world itself. So we get this compelling, irresistible impression that we have a world.

Obviously the debate goes on, and particular features of his view could be wrong. But I think it’s an excellent starting point for thinking about conscious experience. The question then becomes figuring out which animals have transparent self-models. This isn’t easy, and I’ve had limited success. It isn’t at all clear what neurological structures would underwrite our own self-models, so we can’t just peer into the brains of animals and figure out which ones have a world. And I’m not at all clear on what behavioural criteria are the right ones to focus on.

Currently, my best approximation to an answer is to see whether an animal cares about reality. Constituting a world is making a model of reality, so inherent in consciousness is a concern that we have a correct picture of the world. For example, given a choice between finding out that my significant other is cheating on me and feeling really bad about it, or never finding out and proceeding in blissful ignorance, I would choose the painful reality. In that sense, I care more about reality than my own comfort.

Similarly, my cat is a terribly skittish little animal. Loud noises, all loud noises, make her flinch. But despite being afraid of basically everything all the time, she insists on having closed doors opened. She wants to know, despite being terrified, what is in the great beyond of the stairwell outside the doors of our apartment. I take this as evidence – not conclusive, but suggestive – that she has a world.

Chickens, on the other hand, seem to not care that much about reality for its own sake. I’m told by a reliable source that to count as ‘free run’, chickens merely need to have some means of access to the outside. But given sufficient food, heat and space, they will spend their entire lives indoors, ignoring the exit which would expand their experience of reality. Again, I could easily be wrong, but I take this as evidence that chickens do not possess a world.

On that note, I’ll leave off with a video of some chimpanzees seeing the light of day for the first time in their lives. I can only imagine how terrifying it would be to have lived in a medical laboratory for 30 years, and then suddenly be turned loose into the wide world. But courage and curiosity drive them, past their fear, to look up and see the sun for the first time.

15 thoughts on “on having a world”

  1. Hi Cory,

    Jana linked me to this article, and I found it really interesting. Here’s a few thoughts:

    I like your idea that the relevant line with which to determine whether we’re obliged to care about an organism is whether this organism ‘has a world’. There’s something right about that, and in fact it sounds to me like a slightly different take on what at least the two big camps of ethics have been trying to get at: with some stretching, a Kantian deontologist might be inclined to agree that the capacity to operate under a self-conceived law is a mark of ‘having a world’ (in the ethically relevant sense); while a utilitarian might argue that the capacity for sentience is just this mark. In both cases, ethicality is measured in terms of some standard for ‘mattering’, and this standard itself can be (pretty) roughly resolved into the broader criterion of ‘having a world’.

    If this likening of deontology and utilitarianism to the criterion of ‘having a world’ is right – and I think there’s some uncomfortable stretching in believing that it is – then a question arises about how we can refine this criterion so that it’s not so broad as to include two opposed views on the nature of ethical obligation. That’s a discussion worth having…but I want to raise two other points here.

    The first one concerns whether we’re to understand ‘having a world’ in strong representationalist terms, as Metzinger seems to believe. I remember many of us commenting in our Metzinger seminar that it was striking that he believed conscious experience was a kind of ‘on-line simulation’. Perhaps what’s striking about this is that it admits of a gap between mind and world, one that perhaps allows for the metaphysical possibility of envatted conscious experience – that experience could be what it is without achieving real contact with the world. To pump an intuition, ‘this just doesn’t seem right’.

    Enter an alternate account of what ‘having a world’ amounts to: instead of taking representationalism as our point of departure for understanding this idea, ‘having a world’ is instead analyzed in terms of a more primordial phenomenon – in Heidegger’s terms, being-in-the-world, or in Husserl’s terms, ‘having a life-world’. This is the phenomenon that, it seems, necessarily precedes any theoretical reflection on the way things are, or any theory of what consciousness is. It is that world in which our tools for scientific measurement are manufactured, where we stipulate definitions of a ‘unit of measurement’, and where our computers are plugged into the wall before they can generate complex models of phenomena.

    Now, this alternate account of what it means to ‘have a world’ doesn’t disprove Metzinger’s representationalism; it just breaks the necessary connection between ‘having a world’ and a representationalist theory of mind: while it might be true that all representationally-minded things are worlding things, not all worlding things are representationally-minded (or better: not all worlding is representational).

    Assuming still your connection between having a world and ethical obligation, this alternate view of having a world seems to loosen our criteria, potentially allowing for the view that we ought to extend our ethical obligation to a greater number of creatures–like chickens. Much more needs to be said here, but I think it’s at first glance a compelling line of thought…

    My second thought is this: let’s adopt for purposes of argument your Metzingerian view of ‘having a world’. You mentioned that perhaps we can ascribe this to those creatures who care more about reality than not (to the curious cat over the sheltered chicken). Not to suggest that you believed this was in any way a conclusive argument – or even an argument proper, at that – this seems to have some unfortunate results: hermits, schizophrenics, and others with mental illnesses that involve systematic delusion are all up for tonight’s dinner.

    • Thanks for this, I think you’ve put your finger on exactly where this line of reasoning needs to be developed further. The kind of thorough-going representationalism that Metzinger’s project is framed in terms of is its most problematic feature, one which he seems to assume rather than argue for. I have to admit that I haven’t got a clear answer on how to reformulate his project if we do indeed find it necessary to reject this feature. It does seem to me that, at least some of the time, our experience is just a pre-representational given, and this pre-representational given may in fact be the most basic fact about having a world.

      But in The Fundamental Concepts of Metaphysics, Heidegger makes an interesting tripartite distinction between worldlessness (the condition of a rock, say), poverty-in-world (which he suggests animals have) and the full-blown condition of having a world which humans enjoy (section 47). On this view, animals do indeed have a world, in some sense (probably not identical to the sense Metzinger develops), but their world is impoverished. Although animals are capable of behaving (unlike the rock), they are not yet capable of comportment. He writes, “The behaviour of the animal is not a doing and acting, as in human comportment, but a driven performing” (section 58).

      He even (in completely un-Heideggerian fashion) develops a concrete example! He asks us to consider a honey-bee. It behaves in a driven way with respect to, for example, a bowl of honey. If allowed, it will eat its fill of honey, and when full, fly away. In this sense, it seems to recognize honey, and even the presence of too much honey. However, Heidegger notes, if the bee’s abdomen is cut in just the right way, honey will flow out of it as it eats. In this case, the bee will continue to eat and eat, while the honey flows out the hole in its abdomen. “This shows conclusively that the bee by no means recognizes the presence of too much honey” (section 59).

      Now obviously we shouldn’t just take Heidegger’s word for it that animals are driven while humans comport themselves. Deciding where behaviour ends and comportment begins is precisely the original question. But I do think this provides a way of allowing that animals may have some kind of pre-representational world, but not yet a world that sufficiently constitutes them as ethically relevant agents.

    • In terms of where I see this account vis-à-vis the traditional ethical schools, I tend to favour virtue ethics over deontology or utilitarianism. So for me, the relevant question is what kind of attitude or character we ought to adopt with respect to our fellow organisms.

      On your final thought, I think I disagree that this line of reasoning has that danger. People with mental illness are very often distressed, on top of whatever their specific symptoms are, by the fact that their illness has the effect of disturbing their grasp on reality. They care quite a lot about reality, and are upset by the idea that something is interfering with their ability to connect with it. This perhaps gives us a way of speaking about the mind-world gap that you worried about – we are none of us perfectly connected to reality. Life is a constant struggle to overcome illusion or systematic biases, for all of us. We should neither accept a view which puts us completely out of contact with what is real (this result can be had from a straightforward transcendental argument) nor one which suggests we have perfect access to reality. Neither view can make sense of the constant struggle to see the world in more real terms.

  2. Very interesting. Unsurprisingly, I disagree with everything…

    1. I think your transcendental argument is fallacious, because the goal of the relevant sort of ethical critique is not “to produce an organism complex enough to worry about what it eats”, but for organisms to not get killed and eaten. Nothing in that latter goal requires that we produce organisms which can recognise that goal. If the consequence was that no intelligent beings could ethically exist, then that’s the consequence, and it’s perfectly consistent.

    (It’s also not clear that complex plants couldn’t contemplate ethics without either walking or running their fingers across anything, but I’ll put that aside.)

    Of course, the conclusion you’re trying to get to – that it’s a good thing that there are reflective, intelligent beings, and hence anything which is necessary for that is morally acceptable – is pretty plausible. I’d certainly accept it. But it derives from an intuitive endorsement of human existence which may be mere anthropocentric prejudice, not from a ‘transcendental argument’.

    2a. I don’t think your behavioural test is a good one. Firstly, it seems that it would be quite possible to have a transparent self-model, but with not only no awareness that it is a model, but also no idea what a ‘model’ could even be. Indeed, as you present it, that’s largely the point of the ‘transparency’.

    This means that, far from it being true that “inherent in consciousness is a concern that we have a correct picture of the world”, I think much consciousness might be entirely incapable of such concern.

    2b. Secondly, the behavioural observations you take as indicating this kind of ‘love for truth’ seem to be describable without positing any such *second-order* representations. It seems to me that curiosity, contentment, boredom, fear, inquisitiveness, etc. can all be described in (apparently) first-order terms. So curiosity for instance could just be a certain sort of attraction towards unfamiliar things in the world, or towards currently-perceivable familiar things which memory or intuition suggests are routes to unfamiliar, not-currently-perceivable things (it might seem like thinking of something ‘not-currently-perceivable’ is a second-order representation, but I’m not sure it has to be – it could just be an extension of remembering things as far-off and nearby).

    If that’s the case, then your criterion amounts to privileging inquisitive animals over either easily-satisfied or pervasively timid animals. Inquisitiveness is, I suspect, a good indicator of intelligence, but I wouldn’t want to make it a criterion of consciousness altogether.

    It’s especially problematic that your criterion is comparative – it requires that one motivation be stronger than another. But that could obviously be satisfied by two quite different conditions: a very powerful inquisitiveness, or a very weak capacity for fear or contentment.

    3. For what it’s worth, the criteria I would tend to apply in answering roughly this question are: first, to look for instances of novel behaviours, i.e. deployment of instinctive action patterns in new or open-ended ways; second, to infer consciousness/enworldedness for all normal specimens of those species; and third, to infer (with less confidence) consciousness/enworldedness for all normal specimens of related species.

    My reasoning is that we want to capture the difference, firstly, between a conscious human action and a reflex, and, secondly, between a conscious human action and the action of a sleepwalker (or an animal whose forebrain has been removed).

    Complexity of action does *not* serve to differentiate those two, since sleepwalking or reflexive actions are often complex. What differentiates them is the capacity for innovation. *But* this capacity is only displayed quite rarely, in many people – many if not most of our conscious actions follow old, efficient patterns. So the presence of novelty is a good sign of the presence of consciousness, while the absence of novelty is usually just a sign to look harder, but eventually a sign of the absence of (some relevant sort of) consciousness.

    • It is indeed unsurprising that you disagree. I was actually hoping to get your reaction in particular, because I know you have strongly held beliefs on this subject.

      1. I’m not sure I get your criticism here. The goal of the ethical critique is not what is relevant, it is the conditions for the possibility of ethical critique which I’m interested in.

      2a. I agree entirely that possession of a transparent self-model does not imply that the possessor is aware that it is a model. You’re quite right that that is precisely what ‘transparent’ means in this context.

      But the whole point of having a model of reality is that it tracks some features of what is really real. That’s what it’s for. If the basic coordinates of our experience are defined by such a model, then I don’t think it is implausible to say that we are deeply committed, by virtue of having a self-model, to a concern with reality.

      2b. Here, I simply concede the point. The behavioural criteria are very rough, and I’d like them to be a lot better. In all probability, there will be no single behaviour that is necessary and sufficient for calling a creature conscious. What would be preferable would be to have a lot more insight into our own process of constructing a world, such that we have more specific things to look for in animals. But at least I didn’t trot out the hackneyed old ‘mirror test’, right?

      3. This seems promising. One would want not just novelty, but *structured* novelty, appropriate to the situation.

      • 1. Ok, so what I mean is that if someone says “you shouldn’t kill anything, anything at all” and you say “but then I would have to not exist” they don’t have to care. They can say “sure, you should kill yourself for the greater good.” If you then say “but that violates the conditions for the possibility of ethical enquiry” they also don’t have to care, and can say “you should violate the conditions for the possibility of ethical enquiry for the greater good.” That’s perfectly consistent.

        2a. But what sort of ‘commitment’ are we talking about? It seems like you want to say that all conscious beings are committed to a certain concern, without being aware of this commitment (we can’t be aware that we’re committed to wanting our model to be a good model, if we don’t know what a model is).

        I can see how ‘commitment’ can be used in that way, to mean something like ‘x has good reason to pursue y’, or ‘x would be inconsistent or irrational if they did not pursue y, given the other things they do pursue’.

        But *that* kind of commitment doesn’t predict people’s behaviour. It’s an evaluative claim about what’s most in accord with reason, not an empirical claim about the facts of their psychology. So we can’t find out whether beings are ‘committed’ in this sense by observing their behaviour.

        2b. Fair enough.

        3. Right, exactly. Things like finding a new object and working out how to use it as a tool to get food.

        • 1. But someone making that argument is only able to do so because they are large and complex. A great many tiny things had to die in order for them to even be in a position to worry about it. The thing they are arguing against sustains the possibility of their making the argument. Banging on the keys of one’s keyboard kills a great many little organisms. So the argument can only be made from a position which is self-undermining in a practical sense.

          Now that I think about it, this disagreement probably rests on our previously discussed disagreement about the status of pragmatics in defining epistemic and metaphysical questions. (it’s here, for those of you who just joined us: https://ctlewis.wordpress.com/2011/08/02/what-was-reductionism/#comments )

          2a. I think this is a really good question, which I should have addressed in the original post. I’d like to propose that a concern for being aligned with reality isn’t a desire on a par with others. The idea I’m toying with here is that a concern for truth is a structural feature of having a world. If what it is to be conscious is to have a transparent model of the world, then that model has to be constantly updating as you move through the world, and to function as a model it needs to be keyed into finding real patterns in that world. There are at least two claims here: 1) that our minds are, as a matter of fact, designed such that thinking something is real is a precondition for caring about it, and 2) that caring about what is real is at least partly constitutive of what it is to have a world.

          • 1. “the argument can only be made from a position which is self-undermining in a practical sense.”
            I get that. But that doesn’t mean they’re wrong. It means, at worst, that they’re hypocrites (even that may not follow – they might be staying alive only to persuade others to die with them, or they might not exist, but be the voice of your conscience). But hypocrites are often right in what they preach.

            Here’s an example. Suppose someone comes to you one day wearing sweatshop-produced shoes, sweatshop-produced clothes, etc. and tells you that you shouldn’t buy or use anything produced in a sweatshop. Noticing their hypocrisy might make you lose respect for them, but it’s irrelevant to whether their claim is true.

            2a. “The idea I’m toying with here is that a concern for truth is a structural feature of having a world.”
            Very plausible, but then I don’t see why we should think that the behavioural observations you mention tell us much about this structural feature. A chicken’s brain can be keyed in to finding patterns even if the way it deals with those patterns is by sticking to the familiar ones (sticking to your characterization of chickens).

            • 1. The disanalogy with the sweat-shop shoes is that the possibility of preaching about sweat shops does not depend on having those shoes. It cuts deeper than hypocrisy I think. It is as though you tried to play a board game where one of the rules is that you’re not allowed to play the board game. I take it that ethics is about the choices that we get to make. If your ethical system produces the result that all choice is based in immorality, the whole enterprise of figuring out what is good comes crashing down.

              2a. I take it that, by taking these observations in the context of what we know about how our own psychology works, plus careful observation and some somewhat creative inferences, we can get some kind of handle on the motivational schema of various animals. As I’ve said, hard and fast criteria are hard to come by. But curiosity surely isn’t a useless indicator. Novel behaviour indicates that some flexible process is at work – presumably the reason why having a self-model is useful is that it increases behavioural flexibility and coherence. But as I said, I also find this unsatisfying.

  3. @ Cory’s first reply:

    If we analyze different ways of having a world in terms of the distinction between being driven and acting, and then assume that this suffices to make the ethical distinction between what we are and aren’t obliged to morally consider, then I think there’s a mistake in the implicit view that there’s a one-to-one correspondence between moral obligation and agency: many of the thorniest moral problems come when we are dealing with moral patients, not agents. Additionally, some of our strongest moral intuitions center around moral patients as well: infants, the incapacitated, etc.

    So, I think that attempting to map having a world onto ethical obligation in terms of Heidegger’s distinction between being driven and acting is for these reasons flawed (although you’d be right to point out that animals, in virtue of being only driven, still aren’t moral patients, as moral patients have to satisfy some other criteria for moral consideration for which merely being driven is too weak but also for which complete agency is too strong).

    @ your second reply:

    Yeah, many people with mental illness are indeed troubled by the fact that there’s a large mind/world gap, and therefore they seem still to care to accurately represent the world. But it just seems right to me that many are completely ‘offline’ and hence, by your criteria, not enworlded because they no longer even attempt to represent reality accurately. I do see, though, that this point is arguable, as much depends on what exactly is involved in *attempting* to represent reality with accuracy.

    Very briefly, this brings me to a point that was lurking in my first post but didn’t really come out: the idea is really murky and in many ways unsatisfying, but I do wonder a lot about enactive approaches to mind that attempt to resist the representationalist’s impulse to fundamentally understand cognition as modelling, instead preferring to prize embodied agency – which might include things Luke was talking about, such as the capacity for dynamic, complex, novel behaviour, etc. – as the mark of the mental. The worry is that I don’t really understand how we’re to make sense of things that representationalism seems to capture, like accuracy of representation, the difference between illusion, belief, and knowledge, etc. Makes me wonder whether enactivism has to do away with correspondence theories altogether, in favour of coherence theories or something.

    • I guess what I’m suggesting is that in order to even count as a moral patient, a robust world is necessary. I don’t regard non-living things as moral patients, because I’m highly confident that they don’t care about their existence. I do care about infants, because they have the potential to become moral agents in a way that, for example, an oyster just never will.

      As for people who are truly ‘offline’, I have to wonder whether we do indeed have ethical obligations to them. But I would never be willing to classify the mentally ill as offline. If we’re talking about people in a fully vegetative state, then their status as moral patients depends, if I’m right, on whether they have the potential to come out of it enough to have a world. If you proposed, for example, the farming of headless human clones grown in a lab for meat, I would say that’s really gross, but probably not unethical.

      I’m not sure that enactivism can help answer this question, since even single cells are autopoietic and so ‘have a world’ in that sense. I take it that it would be a catastrophic result if I were ethically obliged not to exfoliate because of my duty to my own skin-cells. Some higher-level unity is necessary, and I suspect that even the enactivists wouldn’t want to suggest that second-order unity can be understood simply in terms of autopoiesis. I suspect that something like Metzinger’s phenomenal self-model provides the right kind of unity, but I think we share a concern that our capacity for representation and belief get cashed out in terms of something not itself representational.

      • Yeah, that would suck. But I don’t see enactivism and autopoiesis being necessarily tied. I think Thompson talks about how it’s unclear whether autopoiesis can sustain higher-order instantiations, and hence I’m not even quite sure how it applies to the mind proper.

        I’m more attracted to its general rethinking of the ‘mind/world relation’. Perhaps it would allow us to capture a sense of ‘having a world’ that makes our putative ethical obligations to many other animals – chickens included – more understandable.

  4. I wrote this response before seeing any of the others (I started it shortly after the original post). I am never quite sure on what ethical basis to place my views, but like you I tend toward virtue ethics as making the most sense; below I meander through the sorts of arguments that seem to me relevant to the question. Just a quick point: all life needs to displace, and therefore destroy, other life in order to flourish. Plants create all kinds of toxins to destroy microbial life and must compete for a place in the sun; microbes likewise compete for food and release toxins (antibiotics), and so on. It may be that all such destruction is unacceptable, in which case we are morally obligated to seek out and destroy all life, as we know it, in the universe – let the universal holocaust begin!!!

    This may just be a recapitulation of what you say above, but my perspective is something like this. The SMBC comic presents a sort of simple Benthamite picture of the ethical question: assume pleasure minus pain equals net utility, a simple state of mind easily measured (in terms of a universal unit of utility, the util). Under this sort of view I could take out an insurance policy on you, kill you painlessly, and use the money to help the starving in Africa (or some other util-raising endeavour, such as attaching electrodes to the pleasure centres of a large number of rabbits), and the net number of utils across all creatures capable of pleasure and pain could easily remain the same or even increase significantly. Of course a rich interior life creates all kinds of other pain associated with death that cannot exist otherwise, such as the anticipation of death and the pain of loss felt by others, so my example is a bit too simplistic. Still, it’s not the net pleasure minus pain that seems to matter to us about death (since a painless death is possible). Rather, death (as opposed to the pain that may be associated with dying) is fearful and to be avoided because it thwarts an individual’s plans, ambitions and long-term goals, and this would also seem to be a reason we find it heinous to kill people.

    Having a world is clearly a necessary condition for having such plans; it is unclear if it is sufficient (they might just be synonymous). Also, like having a world, such goals are necessarily externally directed. The goal of a happy marriage is not to believe you have a happy marriage; rather it is to actually have a happy marriage. So you seek a justified true belief that the state of affairs has been achieved, or remain unsatisfied in your goal (as in the “am I dreaming? pinch me” response to good news). I used to think of this in terms of individuality – i.e. one chicken’s mind is much like any other chicken’s, so they are expendable – but on reflection this might not really identify the right cleavage point. Simple minds (or non-minds) can be as diverse as complex minds, but it’s not the diversity that matters, or so I feel. Still, I might also like to find a way to say that the unique character of each person’s ambitions is part of what gives them an irreplaceable and extra-valuable character in my understanding.

    I’m disinclined to identify a single criterion separating the animals we should care about from the ones we should not. I think that some creatures with rudimentary nervous systems (say a sponge or a starfish) are probably no more capable of sensations like pleasure and pain than a plant, while chickens might lack a world or long-term plans and yet have enough of a mind to experience pleasure and pain in a way comparable to a human, and are therefore not completely beyond moral consideration – though the consideration is clearly much blunted and of a different kind. Conversely, chimps, dolphins and whales may have a very rich mental life, and yet I’m not willing to weigh their moral worth as equal to that of humans.

    Something like this gives a rationale for why we would find roving packs of cannibals hunting and killing vegetarians in a human community objectionable, and something we should put a stop to, while wolves hunting deer in Yellowstone is something we not only tolerate but to some extent encourage. If deer lack a world or long-term goals (or whatever the super utility generator is), then all that really matters, at most, is the average number of deer in Yellowstone over a long-term outlook. Presuming, as we do, that the deer population’s long-term existence is actually made more stable by predation, the wolves of Yellowstone actually increase net utility for deer in general (even if bad for some deer), and of course for wolves. Whereas even if predation of the vegetarian community by cannibals might be one way to stabilize the population, it would not be amenable to the same sort of simple Benthamite calculus, and a solution involving less harsh population control becomes more attractive (of course, beings capable of more complex mental constructs are also more capable of adapting their behaviour – another reason brute-force methods of population control may not be preferred).

    Of course, by my lights it seems that, while meat eating might be defensible, it still poses a hazard in terms of the risk of inflicting suffering on an animal without a compensatory benefit to any other creature (because of mistakes and failures of humane slaughter practice in certain eventualities). This is a pragmatic worry rather than an in-principle one, it seems to me (for example, the worry only applies to human consumption of meat because we cannot, pragmatically, regulate wolves’ behaviour, nor do wolves have the insight to do it themselves). On another track, it may be argued that animal slaughter damages, or is contrary to, the virtue of compassion. For these sorts of reasons, vegetarianism strikes me as perhaps a supererogatory act. I would say that humane livestock rearing and slaughtering practices are, to my way of thinking, obligatory, but I don’t think it is required that they be perfect (otherwise the closest we might get to perfection is not eating meat). Sadly, I don’t think we have yet achieved best practice in terms of humane rearing and killing, and, to indict myself, I can’t say I spend much energy on the issue.

    Or is this all a rationalization to allow me to eat tasty meat?

  5. It’s funny, the dilemma about which animals were edible was one of the things that pushed me towards vegetarianism way back when.

    I’ve been reading and thinking about food ethics lately, too, and the pros and cons of vegan, local, organic, whole food, Paleo etc. I even, briefly, considered adding meat back into my diet.

    For me, the issue isn’t what an animal thinks or feels, because the line between “aware” and “not aware” is too close to call for my comfort. In animal experiments, the methodology must be approved by a rigorous ethics committee – unless you are working with lower animals, in which case anything goes. I can’t remember exactly where the line was drawn, but there was a clearly defined one, and it was probably somewhere along the vertebrate/invertebrate divide. Anything smarter than an earthworm is neurologically complex enough that it has at least some level of awareness, but you can do whatever you like to that earthworm without the ethics committee getting too worked up.

    For me the issue of choosing a good food animal is less about what the animal is aware of and more about what it eats. Most food animals were domesticated because they eat things that are inedible to humans, like grass. Not only are these animals converting non-food into food for humans, but herbivores tend to be tastier than carnivores.

    Also, domesticated animals really are different creatures than wild animals. We’ve been selectively breeding food animals for as much as 8,000 years in order to make them calm, manageable, not too bright, and nicely fat. I was just reading yesterday that farm turkeys have been known to turn their gaping mouths up at the sky to watch the rain fall and will *drown themselves*.

    As long as the animals are treated well and aren’t fed “people food” (I’m bothered by the environmental and social implications of growing grains and feeding them to animals), I don’t see any ethical difference between red meat, chicken or fish. Of course since many food animals are fed unnatural foods in unsavory living conditions, the ethics of eating meat get fuzzy again at the grocery store. I’m keeping my eyes open for grass-fed beef or free-range chicken for sale in town to feed to the rest of the family.

    • Josie! Hi!

      You’ve raised a whole set of issues here that are definitely on the table (so to speak) as well. Even if no animals feel anything at all – supposing Descartes was right and they’re all just machines – we would still have to think about the environmental and social consequences of factory farming livestock. Thinking about that dimension, a vegan diet looks more reasonable than if we’re just concerned about animals as subjects. Producing eggs, milk, cheese, etc. is environmentally intensive in the same bad ways as producing meat.

      But the social/environmental perspective has another interesting consequence, which I think is underappreciated – the quantity of animal products becomes very important. The environmental impact of having a bit of gelatin in your jelly-beans is orders of magnitude different from the impact of eating a steak every night for dinner. Maybe it’s a hangover from Christian puritanism, but nobody seems to talk about the ethical difference between eating a little bit of meat some of the time, and eating a whole lot all of the time. The result is this rigid form of veganism where, if any animal product is listed – even if it’s the 12th ingredient, after red dye #50 – eating that product is as bad as eating a BLT sandwich.

      I’d be curious to know exactly where the line is beyond which you have to get bioethical approval to do experiments. The vertebrate/invertebrate distinction is probably OK in most cases, but octopuses are by all accounts pretty darn aware of themselves.

      Anyway, thanks for commenting!
