Ground morality in one hundred words or less. Points will be deducted for each additional word. [Nov. 21st, 2012|01:05 am]
Scott

Consider the following argument:
If entities are alike, it's irrational to single one out and treat it differently. For example, if there are sixty identical monkeys in a tree, it is irrational to believe all these monkeys have the right to humane treatment except Monkey # 11. Call this the Principle of Consistency.

You are like other humans, not an outlier from the human condition. You have no unique talents or virtues that make you a special case.

You want to satisfy your own preferences.

So by Principle of Consistency, it's rational to want to satisfy the preferences of all humans.

Therefore, morality.

Does this argument go wrong, and if so, where?

It feels like cheating to me. And if I had to cash out exactly why, it would be a lack of belief in categorical rationality, rationality that can tell you what to want independent of ends. "It is rational to want" seems like a weird category error, and my description of the Principle of Consistency sort of sneaks it in by conflating epistemic and instrumental rationality.

On the other hand, a lot of people do believe in categorical morality and in fact get really upset when moral theories aren't categorical and can't tell them what to want from first principles. I wonder if those people would accept this as a valid grounding of morality.

PS: The Internet confirms my intuition that "less" is used correctly in the title of this post, but I still don't really understand why.

Comments:
[User Picture]From: snysmymrik
2012-11-21 06:32 am (UTC)
>>>You are like other humans, not an outlier from the human condition. You have no unique talents or virtues that make you a special case.

Or you could say that every single human has unique talents and virtues.

Your argument hangs on the (ill-defined) notion of uniqueness.
(Reply) (Thread)
[User Picture]From: squid314
2012-11-21 06:34 am (UTC)
I mean, I'm sure you have unique talents like that you play the clarinet better than other people or something, but it doesn't seem like this puts you in a completely different moral category that allows special moral predicates to apply to you.
(Reply) (Parent) (Thread) (Expand)
[User Picture]From: lonelyantisheep
2012-11-21 06:43 am (UTC)
It would be weird to treat one monkey differently if you had sixty identical monkeys. If you had a regular bunch of sixty monkeys though, you might want to treat some better or worse depending on what traits you value, what behavior you want to reinforce, etc.
(Reply) (Thread)
[User Picture]From: Roy Stogner
2012-11-21 02:30 pm (UTC)
In fact, if you had sixty *identical* monkeys, in the extreme definition of identical that the second half of this post tries to use, then by definition you're *already* treating them all the same, no matter what you do - if you torture monkey #11 it violates the preferences of monkey #11 no more and no less than it violates the preferences of monkey #12.

If you allow the monkeys to be merely "symmetric" rather than entirely identical, then you have to consider the possibility that it's "rational" (in whatever "I have a solution for the is-ought problem, I swear" sense of the word) for humans' utility functions to be merely "symmetric" as well.
(Reply) (Parent) (Thread)
[User Picture]From: ari_rahikkala
2012-11-21 06:44 am (UTC)
Identical things deserving identical moral consideration doesn't imply similar non-identical things deserving identical (or even similar) moral consideration. The difference between a barrel of wine and a barrel of wine with a drop of sewage in it might be small in some terms, but I know which I'd rather have.
(Reply) (Thread)
From: (Anonymous)
2012-11-21 06:51 am (UTC)
I've heard this argument before, and rejected it because I just don't care that much about consistency. If the argument is intended as a just-so story for morality, it only works for people who care about consistency, which is a small section of the overall population.

The Principle of Consistency seems like it would lead to utilitarianism if taken seriously. Thus, I will take this comment as an opportunity to talk about utilitarianism and practical implementations thereof. Sorry for the off-topic-ness!

Suppose you are a utilitarian, and you have the true utility function. Now you just need to optimize it. But optimizing utility is not something that an individual can undertake on his own. One person acting alone cannot make all that much of a difference.

Instead, we can think of society as a mechanism for maximizing utility. Observe that this contrasts with usual discussions of utilitarianism, which ask "what can an individual do to maximize utility?" Now we are taking a higher-level view and asking "what can society do to maximize utility?" It's a distributed system: each human in the society will contribute to maximizing the utility. How should we implement our distributed optimization procedure?

One solution is to have each individual human try to maximize total utility. Then everyone will be working together to optimize this common function. But another solution is to have each individual human try to maximize his own personal utility. Here, we're dividing up the objective function into pieces, and saying "give each human in the distributed system one piece of the equation to optimize". This would also seem to maximize utility. Of course, in practice, people's preferences will conflict with one another, which will add some constraints that need to be addressed when combining the solutions from different pieces of the distributed system. (If anyone here knows what Lagrangian relaxation is, that's what I'm thinking of.)

These are two extremes: each human tries to maximize the grand utility function, or each human tries to maximize his own tiny piece of that utility function. Humans seem to operate on something in the middle: we normally maximize our personal utilities, but we prevent some of the inevitable conflicts through empathy (an emotional desire to maximize another person's utility) and through social norms (a prescriptive, deontological system which tells people not to do specific actions that tend to decrease others' utility).
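A minimal toy sketch of these two extremes, purely for illustration (the two agents, three actions, and the separable personal_utility function below are hypothetical, not anything from the comment itself):

from itertools import product

AGENTS = [0, 1]
ACTIONS = [0, 1, 2]  # each agent picks exactly one action

def personal_utility(agent, action):
    # Toy per-agent utility: each agent prefers a different action.
    return -abs(action - (agent + 1))

def total_utility(profile):
    # The "grand" utility function: the sum of everyone's personal utility.
    return sum(personal_utility(a, profile[a]) for a in AGENTS)

# Extreme 1: everyone jointly maximizes the grand utility function.
joint_optimum = max(product(ACTIONS, repeat=len(AGENTS)), key=total_utility)

# Extreme 2: each agent independently maximizes only its own piece.
selfish_optimum = tuple(max(ACTIONS, key=lambda x, a=a: personal_utility(a, x))
                        for a in AGENTS)

print("joint:", joint_optimum, total_utility(joint_optimum))
print("selfish:", selfish_optimum, total_utility(selfish_optimum))

With fully separable utilities like these, the two procedures agree; it's precisely the conflicts between people's preferences (the coupling constraints a Lagrangian-relaxation approach would price) that make the two extremes come apart.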

Sorry if this is incoherent!

-lucidian
(Reply) (Thread)
[User Picture]From: mu flax
2012-11-21 07:27 am (UTC)
My thoughts on this:

1) I accept the Principle of Consistency. It is, in a sense, just a different way to formulate what I call locality: that given two situations A and B, when presented with the same information about each of them, one must act the same towards them. In other words, there is no hidden (non-local) information that can influence a moral choice.

2) I see no reason to believe that all humans are alike, morally speaking. They are obviously not alike in a great many properties, and you are simply begging the question here. I don't even have to argue for a kind of Gnostic "divine spark" that makes some people morally special (even though I think such an argument can be plausibly made), I just have to point to Haidt's foundations (and the strong observed disagreement), significant differences in intelligence, and neurodiversity.

3) Lastly, even if 2) were true, or you'd redefine it for a smaller subset of people sufficiently like you, you'd still not get morality out of it. I'd agree that it would follow that you should treat preferences of this group like your own (and so you should be strategically altruistic, to a certain degree), but you *still* have to show that your own preferences are moral to begin with.

If you just assume that whatever you want is automatically moral, you're just assuming moral subjectivism a priori, and the argument does no actual work.

Consider the Devil himself reasoning that because all demons are equally damned, it is not just useful but *moral* for them to cooperate on their preferences to oppose God, when it would actually be moral for them to repent.

(This is not an endorsement of any epistemic position towards the Devil or strategic advice for the Hordes of Hell.)
(Reply) (Thread)
[User Picture]From: squid314
2012-11-21 05:47 pm (UTC)
A lot of people seem to be rejecting the "all humans are sufficiently alike" principle. But in order to thwart the argument, I feel like you not only have to prove the relatively easy proposition that humans are not actually alike, but that the differences among humans occur in such a way that you deserve special positive moral treatment (so that you can focus on your own desires but ignore others' desires consistently). In other words, the differences between humans have to be such as to grant muflax alone special moral status.

(or muflax and a small group of others selected for some objective non-indexical criterion. That would also create a morality, albeit not a very inclusive one. If you think only white people have moral value, you're a racist but at least not an error theorist)

The Devil's problem in your example seems to be insufficient abstraction; the Devil reasons from "I want to be damned" (wait, does he? or is this just a consequence of his other desires that he is insufficiently assiduous in avoiding?) to "I should want everyone to be damned" instead of "I want my desires satisfied, therefore I should want everyone's desires satisfied." It's the same wacky Golden Rule mixup as "I would like people to give me cheeseburgers, therefore I will give everyone cheeseburgers, even if they are a vegetarian."
(Reply) (Parent) (Thread) (Expand)
[User Picture]From: andrewducker
2012-11-21 08:22 am (UTC)
> You want to satisfy your own preferences.

> So by Principle of Consistency, it's rational to want to satisfy the preferences of all humans.


It's the step between these two that falls down (for me).

I want to satisfy my preferences, therefore I want other people to be allowed to satisfy theirs. (A quite different statement.)

However, I recognise that the satisfaction of some preferences will prevent the satisfaction of others. Therefore we will need some kind of framework for ensuring that some people aren't (too) privileged over others in their satisfaction.
(Reply) (Thread)
[User Picture]From: andrewducker
2012-11-21 08:24 am (UTC)
Oh, and yes, rationality takes axioms and turns them into conclusions through logic. It can't tell you what axioms to start with. If our axioms differ (because: taste), then our conclusions will differ, and so will our moralities.

I value freedom (somewhat) higher than equality, others feel the opposite. Neither is _right_, because there's no such thing as "right" in this case, they're just value judgments.
(Reply) (Thread)
[User Picture]From: mantic_angel
2012-11-21 08:26 am (UTC)
The inclusion of "You are like other humans, not an outlier from the human condition. You have no unique talents or virtues that make you a special case." kind of highlights the failure of the argument: this whole paragraph *feels* like it can be cut, but it's fundamental to connect the two ideas - the first paragraph is assuming identical monkeys.

For me, the most distinct failure mode is "Oh dear god I would be terrified of an AI programmed with this philosophy". This probably says more about me than the actual problem, but I've found it's a very useful litmus test for morality.

(Reply) (Thread)
[User Picture]From: squid314
2012-11-21 05:48 pm (UTC)
See my response to muflax on that point here.
(Reply) (Parent) (Thread)
[User Picture]From: maniakes
2012-11-21 09:14 am (UTC)
As a practical matter, I have much better information about my own preferences than about the preferences of other humans, and between that and transaction costs, I have a competitive advantage seeking to satisfy my own preferences over seeking to satisfy the preferences of other humans.
(Reply) (Thread)
[User Picture]From: squid314
2012-11-21 05:49 pm (UTC)
Right, but we're not talking about practical morality, we're talking about whether we can ground morality at all. You use your refinement of the idea to create a different moral system, but it's still a moral system.
(Reply) (Parent) (Thread) (Expand)
From: (Anonymous)
2012-11-21 09:17 am (UTC)
It's cheating, of course. Morality is complex, and the more general and categorical a moral rule gets, the higher the probability that it's actually wrong. Symmetry and consistency are better treated as "mere" factors in moral calculus, instead of Rules to Rule Them All (note though that I'm in the "high expected convergence" camp when it comes to CEV, i.e. I believe that humans Truly Incompatible with me are rare).
(Reply) (Thread)
[User Picture]From: st_rev
2012-11-21 09:38 am (UTC)
> You are like other humans

BZZZZZZZZZT

This is Kantian/universalist nonsense. I differ from other humans in a clear, important and relevant way: I'm the guy who's always right here. There is an overwhelming difference between me and other people in terms of both information and efficient agency. I know much more about my own preferences and priorities, and it is much more efficient for me to take action to pursue my own goals than it is to pursue the goals of others. It is absolutely rational for me to single myself out and treat myself differently, given this differential in knowledge and action, and it's doublethink to pretend otherwise.
(Reply) (Thread)
[User Picture]From: sunch
2012-11-21 09:42 am (UTC)
Ah, good timing!
(Reply) (Parent) (Thread) (Expand)
[User Picture]From: sunch
2012-11-21 09:41 am (UTC)
Obviously it's completely rational to single out yourself - since you're already singled out by being the only person whose actions you can directly control, whose feelings and thoughts you can reliably perceive and whose very existence depends (to a very large extent) on your own actions.

Besides, the most rational thing a living organism can do is to make sure that as many of its genes as possible continue to exist for as long as possible after that organism dies. Therefore, tribal morality!
(Reply) (Thread)
[User Picture]From: simplicio1
2012-11-21 02:09 pm (UTC)
>Besides, the most rational thing a living organism can do is to make sure that as many of its genes as possible continue to exist for as long as possible after that organism dies.

Just curious if this was said jokingly or seriously, before I go into a long-winded explanation of why it's wrong.
(Reply) (Parent) (Thread) (Expand)
From: (Anonymous)
2012-11-21 12:26 pm (UTC)
Treating people the same doesn't even lead to good morality. Would these moralists want me to treat my wife like any other woman or my children like any other children of the same age?
(Reply) (Thread)
From: printf.net
2012-11-21 02:13 pm (UTC)
I think you're confused -- satisfying preferences involves equal consideration of those preferences, not literally equal treatment.

Still, as a society we've decided that children are best cared for by their own loving parents. That's not incompatible with giving equal consideration of interests, since (fortunately) it's most often the case that parents actively want to look after their own children.
(Reply) (Parent) (Thread) (Expand)
[User Picture]From: xiphias
2012-11-21 12:45 pm (UTC)
"Self" and "other" is a fundamental distinction. You can't just handwave it away.
(Reply) (Thread)
[User Picture]From: marycatelli
2012-11-21 01:28 pm (UTC)
It's just fine as long as you presume that the significant "likeness" is humanity.

Plato's Republic has a swerve where they start to talk about men and women and relevant differences vs. irrelevant ones.
(Reply) (Thread)
[User Picture]From: drethelin
2012-11-21 03:21 pm (UTC)
Isn't the converse of the Principle of Consistency that when entities are dissimilar to you, you should treat them unequally? Care less and less about the preferences of entities as they are less similar to you, starting with yourself and moving on to family, etc.

And isn't that basically what Hitler supported?
(Reply) (Thread)
From: danarmak
2012-11-21 03:30 pm (UTC)
It's also basically what I support. I treat non-human animals worse than humans, insects worse than mammals, and bacteria worst of all.
(Reply) (Parent) (Thread)
From: danarmak
2012-11-21 03:29 pm (UTC)
> You want to satisfy your own preferences.

> So by Principle of Consistency, it's rational to want to satisfy the preferences of all humans.

Wrong. The correct next step is: by the Principle of Consistency, it's rational for each individual to want to satisfy their own preferences. There is no reason to proceed from everyone wanting to satisfy their own preferences, to everyone wanting to satisfy everyone else's. Also, ISTM that the argument relies on a hidden assumption of moral realism.

Compare this scenario: you are in a duel to the death. You and your enemy are alike in every morally relevant respect. Only one of you will live.

Should you "rationally" want to satisfy your enemy's preferences as much as your own? Should you agree to flip a coin to determine who kills the other, instead of fighting for your life? The principle of consistency says yes. I say no.
(Reply) (Thread)
From: (Anonymous)
2012-11-22 05:44 am (UTC)
> Should you agree to flip a coin to determine who kills the other, instead of fighting for your life?

Digression: if my enemy and I are equally matched, then both of us should prefer such an agreement (if it were enforceable) to a duel that could leave _both_ of us dead.

-orthonormal
(Reply) (Parent) (Thread)
[User Picture]From: eyelessgame
2012-11-21 03:46 pm (UTC)
Well, for one thing, the preferences of humans often conflict, making it impossible to satisfy all humans' preferences, and requiring some sort of arbitration to decide which ones should and should not be satisfied.

But ultimately, what's controversial about your statement? I can want every person to be happy (="preferences satisfied"). Most people, I think, want everyone to be happy. They just strongly disagree on what preferences other people ought rationally to have, and what actions need to be taken for the greatest number of people to have their preferences satisfied.
(Reply) (Thread)
[User Picture]From: eyelessgame
2012-11-21 03:51 pm (UTC)
For example: people opposed to gay rights do not oppose gay people being happy. They just think other things ought to make gay people happy, i.e. they should be in contented heterosexual relationships.

In the same way that if person X prefers for person Y to be dead (e.g. intends to murder person Y), I do not want person X's preferences satisfied, nor do I want person X to be unhappy. Instead, I want to change person X's preferences, and allow person X to be happy through satisfying those changed preferences.

Many people's preferences are to change the preferences of others. Sometimes we're not wrong.
(Reply) (Parent) (Thread)
From: (Anonymous)
2012-11-21 03:47 pm (UTC)
There are a lot of things wrong with the argument.

1. Monkeys do have reason to single out other monkeys. For example, being bonded to another specific monkey entails that if harm were to come to that monkey then the other bonded monkey would suffer. Either bonded monkey has reason to place more moral value on the other. I.e., treating scope insensitivity as if it didn't exist or shouldn't exist is stupid for obvious reasons (and *really* stupid for more subtle reasons like the problem of demandingness).

2. Every human has unique specifics that make them a special case: experiencing as a specific human (i.e., your conscious and individual perspective). Other people don't inhabit other people's bodies, and for that reason it makes perfect sense to judge and view things from the subjective perspective (rather than the objective perspective of the "human condition" and "other humans").

3. Personally, I don't want to satisfy my own preferences. I want to modify my preferences such that I satisfy my ideal preferences. And preferences are malleable. (This might *seem* incoherent to some people because they don't grasp the recursion.)

4. Even if you wanted to satisfy the preferences of all humans, you psychologically wouldn't allow yourself to even attempt such a feat, and so you would most likely ignore that reasoning or rationalize not doing *everything* possible like a Peter Singer. Take that to the meta-level, and you now have a consequentialist reason for not endorsing consequentialist reasoning.

5. What is the teleological purpose for morality? Evolution and culture are designers by the nature of their constraints; morality has a purpose. I suspect it's to help societies flourish. So, trying to satisfy the preferences of humans at the individual level makes little to no sense to me.

6. I prefer the color blue to all other colors such that I would like to see more blue in the world. It is therefore rational for me to act such that there is more blue in the world. Really, is it? I actually would prefer spending my time reading, for example. Likewise with acting on moral reasoning.

(Reply) (Thread)
From: printf.net
2012-11-21 07:36 pm (UTC)
> 4. Even if you wanted to satisfy the preferences of all humans, you psychologically wouldn't allow yourself to even attempt such a feat, and so you would most likely ignore that reasoning or rationalize not doing *everything* possible like a Peter Singer. Take that to the meta-level, and you now have a consequentialist reason for not endorsing consequentialist reasoning.

But if everyone wanted to satisfy the preferences of all humans as much as Peter Singer does (or at least get rid of extreme poverty), presumably it would happen extremely quickly. So that doesn't seem like a good argument against having society internalize that morality.
(Reply) (Parent) (Thread) (Expand)
[User Picture]From: eyelessgame
2012-11-21 03:55 pm (UTC)
One other thing. Isn't your Principle of Consistency written from the point of view of someone outside the tree of monkeys, deciding (as the human observing the tree) how you will treat the monkeys? If we're all monkeys here, there's no other entity observing from the outside point of view to generate this Principle of Consistency. This probably repeats what others already commented...
(Reply) (Thread)
[User Picture]From: squid314
2012-11-21 06:02 pm (UTC)
Why should that make a difference? If you start climbing the tree, does the moral status of the monkeys change?
(Reply) (Parent) (Thread) (Expand)
[User Picture]From: erratio
2012-11-21 06:22 pm (UTC)
(not looking at other comments yet)

So in optimality theory in linguistics, you basically have a whole bunch of rules, and each language ranks the rules in a different way to yield the languages we know and love. There's a catch though in how you're allowed to formulate the rules, and I believe that these restrictions on formulation are a cognitive universal, although I'm not 100% sure.

Basically, rules can refer to one thing, or they can refer to all things (of a certain category), but they can't refer to N things or all-N things. There are also meta-rules about what makes a category, what locations in words/sentences can be targeted as 'one thing', and so on, but that's less relevant to morality. This kind of formulation also satisfies the Principle of Consistency: treat things in the same category the same way.

Saying that rule X should apply to every monkey but #11 is clearly not kosher, because this doesn't yield a coherent category.
Saying that I want my own preferences to be satisfied is clearly kosher, since 'myself' is very clearly an instance of 'one thing' and there's no requirement that I extend it.

The morality part would come in when you start trying to include anyone else's preferences in yours. Saying that I want my own preferences and those of that guy over there to be satisfied is clearly not kosher, because 'myself' and 'that guy' aren't a coherent category; the only way to start caring about 'that guy' is to build a coherent category that includes both him and me and then care about that whole category - maybe 'white people' or 'students' or 'humans'.

And then the real problem is determining what constitutes a coherent category. Even more problematically, people seem to have different intuitions about this. 'People from the same neighbourhood/city/country' seems like an asinine category to me, but lots of people use it all the time. I also would have said a priori that 'earning above threshold X' isn't a coherent category, but the Occupy movement seems to say otherwise.
Your original argument assumes a priori that we should be using the category 'humans'. That doesn't necessarily follow. Maybe there's a meta-rule telling us to construct the smallest coherent category possible, and the people who care about all humans have either encountered too many to categorise nicely or just suffer from a defect in being able to construct parsimonious categories.
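A tiny sketch of that restriction, in my own framing rather than erratio's (the names One, All, and the monkey strings are made up for illustration): a rule's scope can be a single thing or a whole category, and there is simply no constructor for "every member of the category except #11".

from dataclasses import dataclass
from typing import Callable, Union

Entity = str

@dataclass
class One:
    # A scope that refers to exactly one thing, e.g. "myself".
    target: Entity

@dataclass
class All:
    # A scope that refers to every member of a coherent category.
    category: Callable[[Entity], bool]

Scope = Union[One, All]

def applies(scope: Scope, entity: Entity) -> bool:
    if isinstance(scope, One):
        return entity == scope.target
    return scope.category(entity)

me_only = One("me")                                   # expressible: just my own preferences
all_monkeys = All(lambda e: e.startswith("monkey"))   # expressible: the whole category
# Not expressible: "every monkey except #11" -- there is no all-but-one
# constructor, which is the restriction described above.
print(applies(me_only, "me"), applies(all_monkeys, "monkey_11"))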
(Reply) (Thread)
[User Picture]From: maniakes
2012-11-21 09:35 pm (UTC)
Upon further consideration, I'm not sure the Principle of Consistency is required by rationality at all. Not unless you're defining rationality in very narrow terms that require a particular form of utility function. Why would it necessarily be irrational to pick one monkey and declare "This is my monkey. There are many like him, but this one is mine"? If that preference is sincere, I don't see it as any more irrational than a Paperclip Maximizer.

By presupposing that there's such a thing as a "right to humane treatment", etc, and that it must apply equally to all monkeys, you're sneaking your conclusion into your premises.
(Reply) (Thread)
[User Picture]From: squid314
2012-11-21 11:04 pm (UTC)
I think this is the same thing I mean by the distinction between epistemic and instrumental rationality.

If we treat morality as an objective fact, like there really is such a thing as "right to humane treatment" which monkeys either do or don't have, then it would be weird to suspect without evidence some distinction between monkeys, just as it would be odd to point to Monkey #11 and say "I bet that monkey, and none of the others, has liver cancer".

If we treat morality as being about your desires, then of course you can randomly choose one monkey and do whatever you want with it; it's not rational, but desires aren't supposed to be.
(Reply) (Parent) (Thread) (Expand)
From: (Anonymous)
2012-11-21 09:35 pm (UTC)

Where does it go wrong? Universalism?

"If entities are alike, it's irrational to single one out and treat it differently."
"You are like other humans, not an outlier from the human condition."

Between the two quoted lines, it seems to me like some sleight of hand has been going on, smuggling in a universalist assumption that isn't fully spelled out, and so I'm not sure how to phrase my complaint with it.

Potential objection the first: No, I am not like other humans. I am not other humans. I'm me. I am already singled out. I have greatly privileged access to information about myself and power over myself. I should likewise privilege satisfying my own preferences first and only secondarily the preferences of other people.

Potential objection the second: This argument proves too much. If I switched out the second sentence with "You are like other entities, not an outlier from the readers-of-this-argument condition", it would apply to every mind that could read the post.

Consideration of rebuttal to objection the second: You could accept the switch and claim that the argument grounds a very general morality, applicable to all minds capable of perceiving an argument, and that all such minds should therefore cooperate. This strikes me as absurd, so I'll focus on the alternative: You could reject the switcheroo and attempt to describe why it causes the argument to break down.

Here I posit a rebuttal invoking something like the psychic unity of humankind. I think attempting to specify it in detail would lead towards strawman territory, so I will go straight to my counter-objection: that there is only a difference in degree, not kind, along the spectrum from "You are like other clones from the same DNA" to "You are like other family members" to "You are like other humans" to "You are like other Earth-evolved intelligences" to "You are like other perceivers of this argument"; with the last group still having more psychic unity than intelligences in general.

-Erik
(Reply) (Thread)
From: (Anonymous)
2012-11-22 12:11 am (UTC)

dk

It seems to me that this argument equally well proves that it is "irrational" to try to win a game. Few people consider games immoral, and I don't think this is ever the reason that they do. (In particular, no one considers it moral to play games, but immoral to try to win against adults.)

Some people endorse moralities that obey the principle of consistency, but reject the morality of the individual preferences (for reasons other than conflict with other people). As an extreme example, Clippy. Or many economic models care about agents' preferences according to their wealth, which is to say that they hold no one to have an inherent "right to humane treatment."

Incidentally, the inclusion of language of rights in the principle of consistency seems to already assume the conclusion of the existence of an objective external morality, but this can probably be better hidden without changing much.
(Reply) (Thread)
[User Picture]From: simplicio1
2012-11-22 03:25 pm (UTC)
>Ground morality in one hundred words or less. Points will be deducted for each additional word.

Hm, I'll try.

Sentient beings have needs for themselves & their loved ones that are subjectively imperative. We don’t experience all others’ needs as imperative, but we can see the symmetry in our situations, so moral discourse takes an idealized impartial perspective on preferences for dispute resolution. Morality often expresses itself in propositions, but it also involves a speech-act of ‘taking a stand.’ It is sometimes instrumental in nature, in which case propositions are common (“your policy won’t really help the poor”), sometimes terminal, in which case its nature as a ‘legislating’ speech-act is obvious (“I will not stand for animal cruelty”).
(Reply) (Thread)