103 thoughts on “Objects, Identity, and Concept-Formation”
Very true, but by the same logic, there is no such thing as “functional programming”. Sure, you can come up with a definition for “functional” just like you can come up with one for “object-oriented”, but it certainly wouldn’t stick in every situation (just as “object-oriented” fails to).
I think we could strengthen your claim a little bit by saying that “there is no such thing as a language paradigm”. Functional, object-oriented, logical; they’re all just arbitrary labels that we plaster onto nebulous concepts because our puny minds can’t hold sand with a spatula.
My point is that claiming that “object-oriented” *specifically* is a non-entity is just a little bit inflammatory. Very true, but it’ll ruffle some feathers.
I use “functional programming” strictly to refer to programming with mathematical functions. It’s entirely clear when you have stopped using functions, which is when you introduce a side-effect such that your program is not referentially transparent. I’ve seen the term “functional programming” used for all kinds of things which are not that, but I think such use only adds confusion.
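For instance, here is a minimal sketch in Scala (the names are mine, purely for illustration) of where that line gets crossed:

    object RT {
      // Referentially transparent: any call, e.g. square(3), can be replaced
      // by its result, 9, anywhere in the program without changing its meaning.
      def square(x: Int): Int = x * x

      // Not a mathematical function: each call also mutates count, so
      // substituting a call by its result changes the behaviour of the program.
      var count = 0
      def countedSquare(x: Int): Int = { count += 1; x * x }
    }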
I’m not sure I agree that “there’s no language paradigm”. There is declarative style as against imperative style, and typed calculi as opposed to untyped ones, etc.
Aristotle doesn’t use those exact words, but calls such things “absurd”. See Metaphysics under the discussion of the law of the excluded middle. Aristotle makes a distinction between a thing being or not being on the one hand, and a proposition being true or false on the other hand, implying that propositions can be neither true nor false insofar as they don’t refer to things that _are_. Also see “On Interpretation” for discussion of propositions that are neither true nor false, and the last sentence from part 6 of the Physics:
“If it contains in the case of sensible things, in the case of intelligible things the great and the small ought to contain them. But it is absurd and impossible to suppose that the unknowable and indeterminate should contain and determine.”
Incidentally, Aristotle also talks about “the impossibility of the void” in the Physics, and identifies the fallacy of the reification of zero. My objection to the term “object-oriented” in the English language is essentially the same as my objection to a NULL term in programming languages, which I believe to stem from the same fallacy. See my post “Noumenal Null”.
I tend to agree with your definition of “functional programming”. However, that doesn’t mean that it is the correct one. Much like “object-oriented”, there is no consensus on what it means. Must a functional language be lazy (Haskell) or strictly evaluated (ML)? What about monads? Does the type system have to guarantee all of this, or can a functional language be dynamically typed (lambda calculus)? Can a language be “partially-functional” (Scala)?
What’s worse is that “functional” often seems to encompass a number of features which are merely outgrowths of pure-functional languages (think: closures). I tend to think that “object-oriented” is in a far more ambiguous position, but this is probably because people using functional languages tend to have some familiarity with language formalization.
Since we agree on the definition of “functional programming”, we may refer to this definition whenever we speak of it. But note that concepts are not social conventions. They are tools of cognition. Being correct doesn’t imply that anyone agrees, or that there’s a consensus. What I call “functional programming” you may label anything you want, as long as when we speak of it to each other, we have an objective definition to which to refer. By “objective”, I don’t mean “neutral”. I mean “consistent with the specific facts under consideration”. See here for an objective definition of objectivity: http://aynrandlexicon.com/lexicon/objectivity.html
I’m not sure what meaning to attach to “functional language”. I usually say “purely functional language” if I’m talking about a language that enforces referential transparency.
Thanks for the post. Very interesting. I look forward to your follow-up articles.
Your “OO doesn’t meet criteria #1” point is very convincing. As professional developers / engineers we should learn to deal with problems at the correct level of abstraction, without requiring the training wheels of physical analogy.
That said, while I’m about 90% in agreement with you, I’m not yet convinced that the Emperor has no clothes. I agree that the term has become a poorly defined grab-bag. But despite the ambiguity, there are still techniques and strategies that most people would consider “OO”. When we think of these as tools to be used where appropriate (and not religiously), then they still have a place in every programmer’s repertoire, IMO.
To give some examples of “OO” techniques that I’ve personally found profound:
– “Everything’s an object” – as embodied by languages such as Python or Lisp. Allows us to work with “entities” at a level above hardware-defined types, and to treat them consistently.
– Message Passing – I think that learning to divide a program into entities that communicate by passing messages can be a very good way to structure certain types of problems.
Perhaps your follow-up articles will be enough to convince me that both ideas are really just silhouettes of some more profound underlying concept(s).
If every thing is an object, and every object is a thing, then thing = object. Of course, this is not what people mean when they say that. What they really mean is implied by “treat them consistently”, which is more precise. What you mean is that the algebra is closed. In the case of Smalltalk, for example, it is closed over late-bound lambda expressions with an implicit “self” argument. This is fairly precise, so I’m willing to concede that if there’s anything to be called OO, then it’s merely this. In that case it seems to boil down to late vs. early binding, and a.f(b) instead of f(a, b). Reminiscent of the argument over whether to open eggs on the big end or the little end.
I agree that message passing models are interesting and useful, for example the “actors model”, which provides a perspective on concurrent programming with side-effects, and of course the pi-calculus.
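To make the late-vs-early-binding point concrete, here is a small Scala sketch (my own, with made-up names):

    object Binding {
      class Animal { def speak: String = "..." }
      class Dog extends Animal { override def speak: String = "Woof" }

      // Late binding: which speak runs is decided by the runtime class
      // of a, not by the static type at the call site.
      val a: Animal = new Dog
      val heard = a.speak // "Woof"

      // The a.f(b) versus f(a, b) part is mere notation: the same
      // late-bound call written as a free function.
      def speak(an: Animal): String = an.speak
    }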
Certainly too many to enumerate. The best resource has been seeking out people who are smarter than I am, made easier now that we have the internet. That said, some great eye-openers include, in no particular order:
– Structure and Interpretation of Computer Programs by Abelson and Sussman
– Databases, Types and the Relational Model by Date and Darwen
– Introduction to Objectivist Epistemology by Ayn Rand
– Anything at all by Aristotle
– Why Dependent Types Matter by Thorsten Altenkirch
– A Gentle Introduction to Haskell by Hudak, Peterson, and Fasel
– Divine Proportions: Rational Trigonometry to Universal Geometry by N J Wildberger
– Fluid Concepts and Creative Analogies by Douglas Hofstadter
Once the course is set, it becomes necessary to fill the void with a good treatise on logic, such as “Logic: An Introduction” by H W B Joseph; a good overview of philosophy, such as “A History of Philosophy” by Wilhelm Windelband; a good discrete maths book; and a categories introduction such as “Conceptual Mathematics” by Lawvere and Schanuel.
I think you’re correct that “everything is an object” = closed algebra. And I specifically meant late-binding. I would have defined it less precisely as: “everything” (data, environment, functions, code tree, etc.) is an accessible runtime “object”.
In my experience, this encourages some profoundly different programming techniques when compared with languages like C or Java. And with a language like Haskell, it’s something entirely different again. So I don’t know how to classify these techniques as anything other than “OO”.
Also, I don’t think the choice of a.f(b) or f(a,b) is quite as specious as you suggest. Even Abelson and Sussman made the distinction in SICP where they say about message passing:
“An alternative implementation strategy is to decompose the table into columns and, instead of using ‘intelligent operations’ that dispatch on data types, to work with ‘intelligent data objects’ that dispatch on operation names”
Now, syntactically this would always be (f a b) – since the book uses Scheme. But the difference in how you conceptualize the runtime behavior is quite dramatic, IMO.
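A rough transliteration of the two strategies into Scala (my own sketch, not from the book) might be:

    object Dispatching {
      // “Intelligent operations” that dispatch on data types (the rows):
      sealed trait Num
      case class Rational(n: Int, d: Int) extends Num
      case class Complex(re: Double, im: Double) extends Num
      def magnitude(x: Num): Double = x match {
        case Rational(n, d)  => math.abs(n.toDouble / d)
        case Complex(re, im) => math.hypot(re, im)
      }

      // “Intelligent data objects” that dispatch on operation names (the columns):
      trait NumObj { def magnitude: Double }
      class RationalObj(n: Int, d: Int) extends NumObj {
        def magnitude: Double = math.abs(n.toDouble / d)
      }
      class ComplexObj(re: Double, im: Double) extends NumObj {
        def magnitude: Double = math.hypot(re, im)
      }
    }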
:) To restate my position (hopefully not beating you over the head): I think that despite the ambiguity of the term “OOP” and the vast amounts of noise and misinformation, there really is some “meat on them bones”, in the sense that certain techniques that most people would agree are “OO” (even if they couldn’t tell you why) provide useful tools for creating and reasoning about programs.
Still, great article, and thanks for the discussion.
Instead of classifying those techniques as “OO”, I might suggest classifying them as having “first-class semantics”. Somebody please chime in if you know the formal name for this kind of language design approach. For example, in LISP, every lexical construct is a first-class entity in the language itself (code is data is code). The same goes for Python and Smalltalk.
“Intelligent data objects that dispatch on operation names” says to me: “first-class modules”.
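For instance, a first-class module in Scala can be nothing more than an ordinary value of a signature type (a Monoid here, just to pick a small example):

    object Modules {
      // A “module”: a named collection of operations over some type.
      trait Monoid[A] {
        def zero: A
        def append(x: A, y: A): A
      }

      // Implementations are ordinary values, so they can be passed around
      // and chosen at runtime; that is what makes the module first-class.
      val sum: Monoid[Int] = new Monoid[Int] {
        def zero = 0
        def append(x: Int, y: Int) = x + y
      }
      val product: Monoid[Int] = new Monoid[Int] {
        def zero = 1
        def append(x: Int, y: Int) = x * y
      }

      def combineAll[A](m: Monoid[A], xs: List[A]): A =
        xs.foldLeft(m.zero)(m.append)
    }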
In the end it’s all binary. No objects, no functions, no statements, no procedures, just 1’s and 0’s and a computer processor. The purpose of object-oriented programming is to make it easier to code; it’s to make the processor do what you want it to do (move electrons if you want to be specific). Easier coding is the idea behind different “levels” of programming languages. It’s why you’ll hear C++ described as a “high-level language” and machine code as a “low-level” language.
Object-oriented programming is a programming technique, not an abstraction. There’s no disorientation to be had. “Object-orientation” is a “valid concept” as it is a useful tool for creating sequences of 1’s and 0’s which are then processed by the microprocessor. It is a combination of functions and variables, which is useful to the programmer. It is not an “abstraction that is made in error”; it is a tool made in earnest. Would you say functions are “invalid concepts” because they are just specialized subroutines? Stacks are out as they are merely specialized arrays?
I’m not sure if your attempt at empiricism is meant as satire, but it is refreshing in an endless sea of rationalists. However, I think you’re wrong on several counts, as empiricists necessarily are. In fact, your argument applies at every conceivable level of detail, and is equally wrong at each one. So you might say it’s “fractally wrong”.
1. Programming has very little to do with computers. To paraphrase Edsger Dijkstra: “programming is as much about computers as astronomy is about telescopes.” It isn’t, in the end, all necessarily binary. To wit, you can take a simple program and execute it by hand, on paper, or in your mind. The nature of the physical machine isn’t accessible at the level of programming, and it’s not necessary to know it. The level of abstraction provided by programming languages omits the machine, because it is an abstraction. There must be some process that the programming directs, but it can be any process, not necessarily a binary one. That is the nature of abstraction. On the other hand, following your empiricist argument to its absurd reduction, there are no ones and zeroes, there are just electrons anyway, so nothing is really binary, and there are no computer processors, just lumps of atoms in an elaborate configuration. There are then no thoughts either, just firings of neurons, which in turn don’t really exist since they’re just lumps of organic matter, which is again just a configuration of atoms.
2. “The purpose of object-oriented programming is to make it easier to code” presupposes that you and I both know what you mean by “object-oriented programming”. I don’t, and I don’t think you do either. But you’re on to something, which is the antidote to your empiricist disposition: The purpose of abstraction is to make it easier to think. Higher levels of abstraction let us think about things in more general terms. A “low-level language” such as assembly (and, to a degree, C) helps us think in terms of a specific machine. A “high-level language”, such as Haskell or LISP, lets us think in terms of any machine. And a yet higher-level language, such as pi-calculus, lets us think in terms of the governing dynamics of processes.
3. Functions are not “specialised subroutines”. The concept of a function is centuries older. A subroutine is one way to implement a function. Nor are stacks “specialised arrays”. An array is one of many possible ways of representing a stack. Here again is the principle of abstraction, of the some-but-any. A function is evaluated by some but any process. A stack requires some but any memory structure to hold the elements on the stack. See “Purely Functional Data Structures” by Chris Okasaki for example.
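To sketch what I mean in Scala (an illustration of mine, not the book’s code): the stack is the abstraction, and the list underneath is just one some-but-any representation.

    object Stacks {
      // The abstraction: anything supporting these operations is a stack.
      trait Stack[A] {
        def push(a: A): Stack[A]
        def pop: Option[(A, Stack[A])]
      }

      // One representation among many: a purely functional linked list in
      // the spirit of Okasaki. An array-backed version would satisfy the
      // very same contract.
      case class ListStack[A](items: List[A]) extends Stack[A] {
        def push(a: A): Stack[A] = ListStack(a :: items)
        def pop: Option[(A, Stack[A])] = items match {
          case a :: rest => Some((a, ListStack(rest)))
          case Nil       => None
        }
      }
    }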
Oh I get it. The universe is a computer program. “Objects,” as you understand them, are proposed entities within that universe. However, they are non-real and are therefore analogous to mysticism. In other words, you think we live in The Matrix, and your helpful advice to Neo would be to think of Agent Smith not as human, nor as an object in an object-oriented program, but as a set of procedures. Neo would then say “Whoa, dude!” and facepalm, because he would realize if he had thought of them as object-oriented he might as well be a religious fundamentalist believing in invalid concepts. He takes your advice and pushes all thought of object-orientation out of his head. He then becomes disoriented and confused while trying to figure out how Agent Smith’s arm is moving in the Matrix with respect to the taste of orange juice in the Matrix, and is killed. Even more tragically, it turns out The Matrix was programmed in Java, and thinking about Agent Smith as an instance of public class AgentSmith, would have made it a lot easier to understand than trying to figure out how the orange juice procedure affected Agent Smith’s fisting speed.
If I understand all that cryptic blather correctly after peeling off the layers of sarcasm, you’re accusing me of rationalism in opposing your empiricism. That accusation is unwarranted, since it’s a false dichotomy. The choice is not either reality with no recourse to concepts (Empiricism, Materialism) or concepts with no recourse to reality (Rationalism, “The Matrix”, Plato). The former means, ultimately, that there’s no such thing as consciousness. The second, ultimately, means that there’s no such thing as reality. At the heart of this false dichotomy is another: the mind-body dichotomy, or the “analytic-synthetic distinction”. I reject this distinction and uphold a third view, Objectivism, in which concepts are objective, i.e. not independent of reality, but abstractions made from the facts of the material world by an entity of a specific kind, for the very reason that it is not omniscient or infallible: a human mind.
If that’s not what you meant, then I’m afraid you’ve lost me. Further “Matrix” references will not be helpful because I only saw the first one and I didn’t like it.
I don’t see how your Randism factors into object-oriented programming, despite the word “object”, which I hope is not where your argument stems from. The concept of object-oriented programming has nothing to do with existentialist metaphysics.
This, I think, sums up your error:
The term “object-oriented” is at least honest in that it says what it implies, which is a frame of mind, an orientation of the programmer
The term object-oriented is not a frame of mind, or an orientation of the programmer. It’s a framework of a program, an orientation of the program. Program, not programmer. It is separate and distinct from the mind of the programmer. The programmer may choose to create a program using this framework for whatever reason they want.
The term “object-oriented programming” has nothing to do with anything. There isn’t any such thing (except, perhaps, as commented by Matt; see above). If you have evidence to the contrary, please supply a succinct definition with a genus and differentia, and an example or two so that the rest of us may validate the concept by repeating the induction that led you to it. So far all I’m getting is that you know it intuitively or you heard it from somebody else. In that case, maybe you can tell me what “numberwang” is. I hear that 4 is numberwang, but 5 isn’t. Or maybe it was the other way round.
I don’t think “object-oriented” has anything to do with objectivity or Objectivism, though they have a common word stem there. I don’t know, ask a linguistic analyst, but I think that’s a red herring.
Ok, object-oriented programming is a programming style (genus) in which both functions and variables are unique to a hierarchical referent (differentia). How’s that?
Can I understand this as saying that any programming with mutable first-class modules is object-oriented programming? It’s very similar to what Matt was saying earlier in that case. I’ll consider that in my second follow-up post, on the distinction between values and variables. Thanks.
Ok I looked up what mutable first-class modules means, and yes basically I would say that. (I think I would include constant objects, immutable objects, as well). So, basically, first-class-modules-oriented programming = object-oriented programming. To me, programs are binary flowing through logic gates. Everything else is programming style which is merely intended to help the programmer produce the necessary logic for the input. I look forward to your follow-up…
I somewhat agree: there’s no such thing as “object oriented programming” at language level.
Of course, there is object-oriented analysis and object-oriented design :).
If I understand correctly, you’re equating concept with theory. That’s very powerful, but I don’t think it is the everyday use of “concept”, and maybe that causes some confusion.
IIRC, Bertrand Meyer’s “Object-Oriented Software Construction” has a pretty solid definition of the OO theory. (I didn’t read it, but Brian Cantwell Smith’s “On the Origin of Objects” seems to cover it from the metaphysical point of view.)
I would not give much weight to Paul Graham; his accumulator definition in Haskell shows either ignorance or dishonesty.
OOP doesn’t have “nice” properties and offers few guarantees from the programming-language point of view (aka “OOP sucks”), but modeling the abstractions of the computing space on the abstractions of the domain space has proved to be very powerful.
I’m not convinced about the existence of “object-oriented analysis and design”, in the sense that such a thing is somehow separate and different from analysis and design that isn’t “object-oriented”. But I don’t know. It seems arbitrary anyway, like a ritual with an accidental practical benefit (medieval physicians using leeches comes to mind). I have read some of Bertrand Meyer’s book (many years ago), and I think he is brilliant, but he seems to be rationalizing “OO” rather than providing the method for its discovery. That is, he seems to be patching an a priori notion of “OO” so that it achieves modular software construction. I will have to read the book to convince myself either way. Brian Cantwell Smith looks interesting. Thanks for that recommendation.
Like I say, it’s a loose grab bag, so you can very well omit anything you want and still call it Object-Oriented if that floats your boat. That said, you can readily do away with “inheritance” by using composition. As for polymorphism, well, many languages employ polymorphism and wouldn’t be called Object-Oriented by any stretch.
See “What Does Substitutability Really Mean?” by Chris Date for a good discussion on the matter of subtyping and polymorphism (from a decidedly non-OO point of view, granted).
As for polymorphism, well, many languages employ polymorphism and wouldn’t be called Object-Oriented by any stretch.
Any serious discussion on the subject would have to incorporate Luca Cardelli’s Theory of Objects into the mix. From a theoretical standpoint, object-oriented programming is modeled via co-algebras.
Cardelli and Wegner have classified four types of polymorphism: Parametric, Inclusion, Overloading, and Coercion. A language like Haskell does not have inclusion (subtype, subclass) polymorphism but has Parametric (generics) and Overloading (type classes).
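Two of the four can be exhibited side by side in Scala, for example (a quick sketch of mine):

    object Poly {
      // Parametric polymorphism: one definition, uniform for every A.
      def headOption[A](xs: List[A]): Option[A] = xs match {
        case x :: _ => Some(x)
        case Nil    => None
      }

      // Overloading, type-class style: the behaviour is chosen per type.
      trait Show[A] { def show(a: A): String }
      implicit val showInt: Show[Int] =
        new Show[Int] { def show(a: Int) = a.toString }
      def display[A](a: A)(implicit s: Show[A]): String = s.show(a)
    }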
The typical idea of object oriented programming incorporates three tenets:
1. Encapsulation
2. Inheritance
3. Subtype Polymorphism
These things are possible in practically any PL but an OO Programming Language would be one that actively facilitates this model.
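A minimal Scala sketch of the three tenets together (illustrative names only):

    object Tenets {
      // 1. Encapsulation: the balance is hidden behind the interface.
      class Account(private var balance: Int) {
        def deposit(amount: Int): Unit = balance += amount
        def current: Int = balance
      }

      // 2. Inheritance: SavingsAccount reuses and extends Account.
      class SavingsAccount(start: Int, rate: Double) extends Account(start) {
        def accrue(): Unit = deposit((current * rate).toInt)
      }

      // 3. Subtype polymorphism: any Account, including a SavingsAccount,
      // may be used where an Account is expected.
      def report(a: Account): String = "balance: " + a.current
    }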
Good comment, thanks. I did have a brief look at “A Theory of Objects” before writing the post. The essential unanswered question is: what is that theory for? Why would one need it, or arrive at it, if one did not already have a notion of object-oriented programming? It’s all very formal, but seems arbitrary.
I think that Inclusion (subtype polymorphism) and Coercion significantly overlap. Consider a language that supports implicits. If there exists an implicit function (a:A -> b:B) in a given context, then A logically implies B in that context. It’s not clear to me how that differs from A being a subtype of B (in context). The crucial notion here is substitutability (see What Does Substitutability Really Mean? by Chris Date for an excellent discussion).
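In Scala, for instance (a contrived sketch; Meters and Lengths are made-up types):

    import scala.language.implicitConversions

    object Coercion {
      case class Meters(value: Double)
      case class Lengths(value: Double)

      // With this implicit in scope, any Meters may appear where a Lengths
      // is expected, which is observationally hard to distinguish from
      // Meters being a subtype of Lengths in this context.
      implicit def metersToLengths(m: Meters): Lengths = Lengths(m.value)

      def describe(l: Lengths): String = l.value.toString + " units long"
      val ok: String = describe(Meters(2.0)) // coerced implicitly
    }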
That’s a nice link! I haven’t seen that before, and that underscores my point, that there isn’t any special kind of programming that is OO (admittedly, picking on OO specifically is a little unfair). Programming, abstractly, essentially, is done in the universal language of logic. Euphemisms serve only to shield programmers from this fact. Writing code in a specific programming language (or a specific kind of programming language) is (or ought to be, rather) the last step, in which one translates the language(s) of one’s program into the terms of the chosen programming language.
That’s about the silliest thing I’ve ever heard. The world is full of concepts which are not logically induced. Like “liberal” and “conservative”, for example. Thinking about everything in the world through mathematical terms is a cognitive defect. If you tried, you’d find determining if any particular concept is valid would be NP-complete. Rather we must rely on statistical reasoning. It’s possible to look at some code and see if it’s “OO-ish” the same way you can evaluate an animal and see if it belongs to a species (obviously not possible through logical induction!)
Excuse me, but evaluating whether an animal belongs to a species is deduction. Induction is coming up with a grouping into species by essential differences in the first place.
The world is indeed full of foggy, undefined rubber-terms like “liberal”, “conservative”, and “object-oriented”. Passively accepting them makes your thinking vague, which is a cognitive defect if I ever heard of one. Thinking is hard and requires careful checking.
It’s funny for someone to claim that thinking in math terms is a cognitive defect, then to go on to make general claims about universals, using mathematical terms. “Determining if any particular concept is valid would be NP-complete?” “We must rely on statistical reasoning?” Let’s not use mathematical terms or anything.
At any rate, you’re just making that up. Or to put it another way: It’s highly probable that it is claptrap.
Excuse me, but evaluating whether an animal belongs to a species is deduction. Induction is coming up with a grouping into species by essential differences in the first place.
I assumed you were talking about mathematical induction, but when people talk about “induction vs. deduction” they’re usually talking about inductive reasoning.
The distinction is pretty important, because inductive reasoning isn’t a cognitive defect, it’s a logical fallacy. An example from Wikipedia:
All observed crows are black.
Therefore:
All crows are black.
Which is close to the canonically wrong “black swan” example from that trendy bullshit eponymous pop-sci book making the rounds.
So if you’re actually talking about inductive reasoning, you’re even dumber than I thought. Of course, on the other hand it’s perfectly reasonable to define a programming language through mathematical induction from a few simple premises.
Here’s more from wikipedia:
Inductive reasoning has been attacked several times. Historically, David Hume denied its logical admissibility. Sextus Empiricus questioned how the truth of the Universals can be established by examining some of the particulars. Examining all the particulars is difficult as they are infinite in number. [2] During the twentieth century, thinkers such as Karl Popper and David Miller have disputed the existence, necessity and validity of any inductive reasoning, including probabilistic (Bayesian) reasoning [3]. Scientists still rely on induction nevertheless. That, however, is exactly what Popper and Miller dispute: Scientists cannot rely on induction simply because it does not exist.
At any rate, you’re just making that up.
And you didn’t make up the bit about “valid concepts being defined through induction”?
When I said “Determining if any particular concept is valid would be NP-complete?” I thought you were talking about mathematical induction. Since inductive reasoning isn’t logical, it doesn’t have the same problem, but it’s mostly just a stand-in for more advanced forms of statistical modeling. If you go by pure inductive reasoning you could say something like “all the swans I’ve seen are white, therefore all swans are white”. Whereas a more advanced person would say “I’ve seen n swans, they’re all white, therefore there’s an x% chance that at least 99% of swans are white” and then they might look for observational biases.
The world is indeed full of foggy, undefined rubber-terms like “liberal”, “conservative”, and “object-oriented”. Passively accepting them makes your thinking vague, which is a cognitive defect if I ever heard of one. Thinking is hard and requires careful checking.
Who said anything about “passively accepting them”? They still exist and are still useful. Almost every concept we deal with in our daily lives is only superficial. Take a “car” for example. Seems pretty concrete, but there are vehicles for which it is difficult to determine whether they are cars or trucks, yet it would be difficult to communicate with people without being able to talk or think about “cars” (and each make and model of car could still have gray cases: what about a Honda Accord that had its engine swapped with a Jetta’s?)
What I’m attacking here is your theory of conceptual validity. If it was true, how could you induce the concept of induction itself?
Thinking is hard and requires careful checking.
Thinking certainly seems to be quite a challenge for you. And if you think you can do it through logical deduction (which is certainly NP-complete) from axioms arrived at through induction (which is a logical fallacy), then clearly you’re doing it wrong.
Your comment posted just fine as far as I can tell, unless you mean some other comment than the rude long one above. Listen, if you’re going to be rude, take it somewhere else. I have no obligation to read it or post it, whoever you are.
It should be clear from the context whether mathematical induction or inductive reasoning in general is being discussed.
Regarding “Inductive reasoning is a logical fallacy.”:
1. No it isn’t. Hume was wrong. I don’t subscribe to his empiricism (or rationalism for that matter, i.e. Popper). There is no “problem of induction”. This problem is invented by committing the error of dropping context. It amounts to arguing that, since it’s not possible to reason from specifics in a given context to a generalization that holds universally in all contexts, induction is therefore invalid. It’s obviously possible to reason from specifics in a given context to a generalization that holds given that context. But if you’re a Kantian, you will necessarily miss that since the context demanded of the knowledge of universals (noumena/forms) in the Kantian/Platonic view is nothing less than omniscience.
2. Isn’t that statement an example of inductive reasoning? You have to know what “inductive reasoning” is and what a “logical fallacy” is, by induction no less, before you can deduce that one is an example of the other. Of course, this is assuming that you’re reasoning at all and not just parroting Wikipedia.
It is not necessary to understand induction in order to form concepts by induction. Put another way, you don’t need to understand epistemology to acquire knowledge. “Induction” as a concept (a meta-concept, if you will) is an integration of a vast amount of information, and yet a child uses this process to form his very first concepts.
Look, this discussion involves the very nature of existence and consciousness, so we’re not going to solve it here. Philosophers have been duking it out over this for a very long time indeed. To sum it up, inductive reasoning is assumed by the early philosophers (Socrates, Aristotle), and attacked by modern empiricists and rationalists (Kant, Hume, Popper, although defended by Locke). I reject these attacks, and I’m not alone in my rejection. See “Introduction to Objectivist Epistemology” for a solid defense of induction, and Aristotle’s Posterior Analytics as a supreme example of induction at work in philosophy. See also “Induction in Physics and Philosophy”, a lecture by Leonard Peikoff. A book by the same name, based on that lecture, is in progress by David Harriman. You can read examples of Harriman’s work here: http://www.theobjectivestandard.com/topics/science-technology.asp
Ah, I didn’t realize you were an objectivist. That explains it.
It is not necessary to understand induction in order to form concepts by induction. Put another way, you don’t need to understand epistemology to acquire knowledge.
I never claimed you needed to understand inductive reasoning in order to use it; what I disputed was that inductive reasoning is the only way to form valid concepts. If you can’t arrive at inductive reasoning through inductive reasoning, then inductive reasoning isn’t even a “valid concept” itself. And in that case, your definition of “valid concept” contains invalid concepts.
If on the other hand, you were willing to allow concept formation by other means, then perhaps you could find a way to form the concept of induction. But then you would have to accept the validity of any other concepts that could be derived in the same way.
In either case, your statement about conceptual validity would be wrong.
Your argument amounts to showing that induction is invalid by showing that induction is not inductive. This argument presumes what it intends to refute.
Your other premise is that “you can’t arrive at inductive reasoning through inductive reasoning”. This premise is absurd. Of course arriving at a concept of induction requires a process of induction. There is no problem with that. It is not circular, but (to pun a little bit) recursive. I can see how it would be a problem if you hold the primacy-of-consciousness premise, that induction does not exist until somebody thinks of it, but this premise is false. Existence has primacy. Reality is mind-independent. The human mind is a specific kind of thing and its primary means of grasping reality, for the survival of the human, is induction. Identifying this fact requires induction, but this is not a problem because we are identifying it with human minds.
I am not willing to allow concept formation by other means. If you arrive at something by any other means, then you have arrived at something that is, by definition, not a concept. When I say “concept”, I specifically mean a generalization from the essential similarities of units, omitting specific differences. This is a description of the process of induction. If you mean something else when you say “concept”, then you’re talking about something that has no bearing on the present discussion.
Okay, so you’re saying that inductive reasoning (not including mathematical induction) exists intrinsically in the universe regardless of the human mind, but we can discover it through its own application. That seems reasonable, but I still don’t think that goes far enough to prove the non-existence of other valid concept-formation methods; after all, if induction is “out there”, the other valid concept-formation methods are there as well.
Of course you might just say that other types I might think of using would count as a kind of induction as well, but I would say that any useful concept formation method would have to tolerate some exceptional cases, otherwise any attempt to create concepts that apply to the real world would need an enormous list of caveats, and would not be useful for communication or thinking (IMO).
I don’t exactly follow what you mean, but I’d say it like this: induction exists as an attribute of the human mind, regardless of that mind’s (or any mind’s) knowledge of it.
Do note that you cannot prove the non-existence of something. Proof does not apply to things that don’t exist. To call for the proof of a negative is a logical fallacy.
You hit on an important point with the tolerance of exceptional cases. You will find that Objectivist epistemology does indeed describe a method of concept formation by induction in which concepts are tolerant to new information, because they are contextual. If the context expands (e.g. an exceptional case is found), the concept gains a new definition in the expanded context, without contradicting the previous definition. To borrow OO jargon, the expanded definition becomes a superclass of the previous one.
I really enjoyed the essay. I find the definition of a valid concept a little abstract though. Can you give an example of a concept which meets the 3 criteria outlined? Would the theory of evolution count?
Yes, Darwin’s theory of evolution is a good example of the inductive method at work. The theory of the atomic structure of matter is another superb example. You don’t have to look far to find concepts that match the criteria outlined. For example: “three”, “criteria”, and “outline”.
I don’t exactly follow what you mean, but I’d say it like this: induction exists as an attribute of the human mind, regardless of that mind’s (or any mind’s) knowledge of it.
Lots of things exist as attributes of the human mind that are actually illusions. As an Objectivist, you’ll appreciate that most people mistakenly believe in religion, and one powerful reason for that is that people have an innate tendency to anthropomorphize things. We automatically assign motivations to things based on their behavior and how we imagine what would motivate another person or animal to do those things. That’s great when you’re running around in the jungle — you get an intrinsic feeling for dealing with other people and animals. But it also causes people to believe that the world as a whole is another mind.
Induction could be similar. Just because it exists in our minds doesn’t mean it ‘exists’ as anything other than an approximation or a built-in shorthand.
I think you’re shifting the context here so we’re talking about two different things. When I say “attribute of the human mind” I mean that induction is part of the nature of the mind. The mind exists, and induction is an essential part of what it is. When you say that many things that people believe are “actually illusions”, that’s true, but only because inductive reasoning is not automatic or infallible. Induction is a mode of cognition. Religion is not. Religion is the result of an error in inductive reasoning.
You say that inductive reasoning isn’t infallible, but do you mean that induction itself can lead to wrong conclusions, or only that humans can fail in its use?
What I mean is, there are some fundamental laws of the universe, and some fundamental laws of Mathematics (such that they can be applied by a computer). Are you suggesting that Induction is somehow a fundamental part of the universe as well? Or is it just a “rule of the mind”?
But try as you might, there’s no objective definition of what “object-oriented” refers to. It means anything you want, [..]
This is where your argument makes a non-sequitur leap. The fact that something cannot be objectively defined does not mean it can ‘mean anything you want’. The usual example in philosophy is the fact that what constitutes a ‘game’ is not well defined. Although it is not well defined, everyone that is involved in discussions about the issue agrees that some things definitely are games and some things definitely are not games. The discussion focuses on the ‘grey’ area. Of course, there are arguments that would make *everything* a game and arguments that would make *nothing* a game, but in the end everyone agrees that the term ‘game’ applies only to a limited number of concepts/situations. The term ‘object-oriented’ is of the same sort. Although it isn’t well defined, it is clear to all that there are bounds to its meaning. As such, it is not a ‘non-concept’ and you cannot say that there is ‘no such thing as OO programming’. You may want to argue that it is a useless or debilitating concept and depending on your argument I might agree, but what you are claiming here is, from a philosophical angle, too strict.
So what you’re saying is that concepts are defined by consensus, and the only thing preventing some vague notion that isn’t objectively defined from meaning anything we want is other people’s permission. Do I have that just about right?
The example of “game” is not at all like “object-oriented programming”, because there are certain essential characteristics that, like you say, make something definitely a game. The existence of these shared characteristics of games is what makes the concept of “game” useful in the first place. However, there is no essential characteristic of “object-oriented programming”. It is not at all clear when something is definitely OO, or definitely not OO.
I don’t see what you mean by “too strict from a philosophical angle”. Which philosophy, specifically? Too strict for whom and what purpose?
He’s referring to Wittgenstein, specifically. Please see “Philosophical Investigations”.
It’s actually very difficult to define the essential characteristics of a game. I invite you to try.
Firstly, let me stress that I largely agree with your blogpost. A concept like ‘object-oriented’ is often assumed to have more meaning, and more argumentative force, than it actually has. I agree that for many purposes it is warranted to
[..] tear down the cognitive package-deal that is “object-oriented” [..]
and I agree that
Of course, it does no good to tear down the cognitive package-deal that is “object-oriented” if you don’t replace it with something.
My point was merely that when I connect the philosophical (but perhaps ‘logical’ would be a better word) dots, I do not see support for the statement that
“Object-Oriented” [is] a non-concept
That may seem an unimportant detail when we end up with the same conclusion, but it can be dangerous to think of it in that way.
In your response, you imply that all things we call a ‘game’ have a shared characteristic. However, this is not the case. There are arguments that make *everything* a game: life is a game, meant to be enjoyed, but not to be taken seriously. There are also arguments that make *nothing* a game: even hide-and-seek is a way of measuring skill, and it influences the social rankings of the participants. If all ‘games’ were united by at least one shared characteristic, such arguments could not exist.
These ‘extreme’ arguments are not dismissed because they are wrong: they are dismissed because they do not reflect the way the word ‘game’ is actually used. Some things aren’t considered a game by anyone. This is not because ‘game’ has a meaning that could not possibly be applied to the thing. It is because — and yes, this is almost circular — nobody actually uses the word ‘game’ to describe the thing.
That is the crux of the matter: concepts are defined by their use. Even if what people mean by ‘Object-Oriented programming’ is fuzzy, it’s often still clear what they mean, and you can’t dismiss their intention by pointing out that ‘Object-Oriented programming’ isn’t well defined. That may not be your intention, but it is something that you are easily led to when you have arguments to the effect that ‘Object-Oriented [is] a non-concept’. It *is* a concept, and it is a meaningful concept at that. It’s also a fuzzy, convoluted, impractical concept that is often wielded as if it is authoritative by itself, and your upcoming deconstruction of the concept may be very helpful in explaining its limitations.
On the contrary, there is no support for the statement that object-oriented is a concept. By stating that it is a non-concept, I’m merely pointing out the absence of this support, and I cannot be called upon to prove a negative.
Why is it dangerous to think of it that way? What are the harmful consequences?
If it is not the case that things called “game” have a shared characteristic, then why do we use that term? What could it possibly mean? Why is there even such a word in the first place? Of course, using the word presumes that it means something, and meaning something is to refer to something specific. For example, when you say “life is a game”, it is a metaphor, but you’re already assuming knowledge of what “game” means. When you think of games, you have some specific concrete examples ready to be called to mind, and they all share some characteristic that warrants abstraction into a single cognitive unit: game.
You have the crux of the matter correctly identified, but your conclusion is wrong. Concepts are not social conventions. What you’re proposing is a subjective view of concepts, where words are mere sounds used to tickle certain parts of the brains of other people, and where language is primarily a communication tool. But language is not primarily a communication tool. Language is a cognition tool. For what is it that we communicate when we speak? We communicate thoughts, but first we must think, and we think in concepts.
There are essentially three different views of concepts that have been proposed in the last few thousand years. First, there’s the intrinsic (realist/platonic) theory of concepts. In this view, concepts exist either in the things themselves, or they exist in some other dimension and the particular things are just instances of the concepts (i.e. a chair is an instance of the pure form Chair). Then there’s the subjective theory (yours), in which concepts are all in the mind. In this view, there is no “chairness” anywhere, and so a thing is a chair only because you or someone else calls it that. So if people start calling hamsters chairs, then a hamster is a chair. The third view (mine), is the objective theory of concepts, in which a concept is a relationship between the mind and the concrete things to which the concept refers. I urge you to read “Introduction to Objectivist Epistemology” for a good rundown of this theory of concepts, but in very brief terms it is this: A concept is a mental integration of two or more units possessing the same distinguishing characteristic(s), with their particular measurements omitted. In this context, “unit” is either a concept, or a percept (something perceived and integrated by the senses into a unit). So the process of concept-formation is a process of abstracting from experience.
That last sentence should tickle the programmer in you, at least a little. This is because the process of abstraction with regard to concepts is the exact same process that we use to come up with new abstractions in programming. In the paragraph that follows, I will use the term program to refer to any programmatic expression (or “object”, if you’re thus inclined).
As an example, think of types. What is a type, in programming? Some might say that a type is a pure Platonic form, and that programs are mere instances of these types. This would be the intrinsic view of types. But the objective view is that types are abstractions from programs. In other words, types are theories about programs, and specific programs are proofs of those theories (see Curry-Howard). By the same token, concepts are theories about reality, and the concrete things that inhabit reality constitute proof of those theories.
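In Scala, for a toy illustration of the type-as-theory view:

    object CurryHoward {
      // The type (A, B) => A, read as a proposition, says “A and B implies A”.
      // This total, side-effect-free program is a proof of that proposition.
      def first[A, B]: (A, B) => A = (a, b) => a

      // A => Either[A, B]: “A implies A or B”.
      def left[A, B]: A => Either[A, B] = a => Left(a)
    }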
I’m not sure if “duck typing” represents the subjectivist view of types, but it’s certainly a satisfying analogy.
But I’m getting a little into the content of my next post. Let’s wait for that and then discuss.
It isn’t clear at all what people mean by “Object-Oriented”, even to themselves, and no amount of nudges and winks will make it so. Yes, I can dismiss their intention until and unless I see a reason to accept it.
“Concepts are not social conventions”
Wow. I understand that concepts are physical structures of a sort, but that is not what I think that you mean.
In its initial implementation, Object-Oriented programming was motivated by the desire to have compiler support for the construction of function tables.
In its mass-market implementation, Object-Oriented programming was justified through the idea of domain modeling, which would help to facilitate requirements discovery and negotiation.
In either case, it was a series of techniques adopted by a community looking to do work more efficiently.
Now 50 years on down the road, the methods and technologies have been expanded and morphed to deal with all kinds of practical implementation issues, to the point that the concept is now largely self-referential. Maybe that explains your difficulty.
Are you certain that in some not-too-distant future, the same won’t be true of “Functional Programming”?
And does it really matter? Why do we invest in concepts? Only because they have utility. We learn of utile concepts from others that have found them useful (nobody, not even Aristotle, invents everything from first principles). So the transmission and survival of concepts is largely a social phenomenon.
The same will definitely not happen with functional programming, because FP is a well-defined specific thing, like long division or differential calculus.
So you think that you do not understand what someone means when he uses the phrase ‘Object-Oriented’? You say that you have read countless texts and had countless discussions in which you were just reading and talking gibberish? If so, then how could you make sense of those texts and discussions? And if not, then how did you determine what the ‘non-concept’ meant?
As an aside, you are right that I’m a subjectivist. I will try, as far as I am capable, to interpret your answer to questions above in the light of your epistemology.
I could not make sense of those texts and discussions, ultimately. That’s the whole point. The reason I wasn’t able to make sense of them is that they do not make sense, i.e. they do not integrate with other knowledge. A non-concept does not have a meaning. It is the sound of the voice of a parrot, signifying nothing.
I’m glad that you’re able to identify yourself as a subjectivist. The first step to recovery is recognising that you have a problem.
You can conclude with hindsight that they did not make sense, but I’m sure you didn’t feel that way at the time (or they would be very strange conversations)? How could you think you were having a sensible conversation if some words were completely devoid of meaning? Or if you do not consider them devoid of meaning, then I wonder how you see the relationship between concepts and the meaning of words.
Aside: subjectivism is an uncommon position for a physicist. I do not experience it as a problem :).
In hindsight, I liked the OO religion and I wanted it to make sense. I found the analogy between physical systems and software systems vaguely satisfying, as I’m sure others do. But feeling does not fact make. As an analogy, the fact that gods do not exist doesn’t preclude people from having seemingly coherent conversations about them.
As for the relationship between concepts and the meaning of words: Words are symbols that refer to concepts, and which words denote which concepts is largely arbitrary. They’re perceptual mnemonics used to recall the concept. I.e. you can’t grasp a concept just by learning the word, you have to observe concretes and abstract their similarities, omit their differences, and integrate them into a mental unit which is the concept. No matter how many times you’ve heard the word “dog” and had it explained to you, you can’t grasp what a dog *is* until you’ve experienced both dogs and a lot of things that aren’t dogs. And once you do form that concept, you realise that others have formed that concept as well, and that they use the word “dog” for it, so you decide to use that word too.
Aside: a common symptom of a mental problem is lack of insight into the fact that one has a problem.
I agree that coming up with a definition of OO that everybody agrees with is not easy. But, I figure just because e.g. Java is crap doesn’t mean Alan Kay is wrong. He coined the term, no?
If there are lots of people who get the definition of the concept wrong, that’s not to say there is actually no such concept. Just that a lot of people don’t know what the original real core correct concept really is. The hullabaloo about how OO is to emulate sundry objects in the real world is just misleading OOD trash somebody else came up with.
If every word denoted a concept, there could not be a ‘non-concept’ for which there is a word. However, you assert that ‘object-oriented’ is a non-concept. The question then arises: what *does* such a word refer to? Maybe the concept of ‘unknown word’? But if I were to use unknown words, you would clearly understand that you didn’t understand what I was talking about, and that wasn’t the case here.
You have had meaningful conversations in your life, in which you understood what the other meant, even though he used wrong, even non-existing, words. The wrong word can denote the right concept, because you are capable of associating the word with the concept. Not because the words sound alike, look alike or have similar meanings, but because you are able to ascribe meaning to words based on the context.
The danger that I referred to earlier is that you will tempt yourself and others to dismiss what someone says, solely because you have declared one of the terms that he uses to express himself as meaningless. Meaning does not get lost when a single word does not have a clearly defined meaning.
On the subjectivism: I don’t think I suffer from more problems than objectivists and those in between. We’re all walking tightropes.
Don’t be ridiculous, not every word denotes a concept. For example, the words “slebfert”, “fraxmond”, “numberwang”, and the hyphenated phrase “object-oriented”.
I’ll take my chances demanding that people know what they mean by what they say, danger be damned.
If some words do not denote concepts, then what do they denote? They must denote something, otherwise you couldn’t make sense of the sentence in which they are used. If someone says “The snow was grue”, then you usually guess that he was either trying to say that the snow was green or that the snow was blue. How could you possibly infer that correctly if ‘grue’ did not denote any concept?
I won’t engage in analyzing a folly. What you’re asking is either proof of a negative, or the rational consideration of the arbitrary. Either one of those is a logical error. Philosophizing is to philosophy as rationalization is to reason.
You’ve never heard someone make a slip of the tongue, putting in a completely different word than he intended, while still understanding what he meant?
Well, let me put it to you bluntly then, to repay ‘ridiculous’ and ‘folly’ in kind: you are sweeping the complexities of language and meaning under the rug, seemingly in order to be able to dismiss the term ‘object-oriented’ and anyone that uses it. The assertion that a word needs to denote a single concept is preposterous, as is the implicit assertion that _you_ have managed to do what men like Kripke and Putnam failed to do: come up with a theory of meaning that isn’t riddled with problems. Every single utterance calls upon your conception of ‘language’ and of ‘speech’. An utterance ‘red’ does not only call upon your conception of ‘red’, but also of ‘color’, ‘light’ and, depending on the exact color shade that is meant, ‘blood’, ‘coral’ or, if you are colorblind, ‘that shade of grey’.
I offer you two words for contemplation: ‘context’ and ‘Putnam’.
I haven’t asserted that a word needs to denote a single concept. It can denote several, or none. Nor have I asserted that I’ve come up with a “theory of meaning”. I have, however, come across Objectivist epistemology. You should check it out.
It’s not our job to try to infer what people mean by their “utterances”. It’s up to them to first demonstrate that they understand what they’re saying, before we attempt to do that for them.
That’s about all I’m going to say about this for now.
In a fully or partly objectivist epistemology, you still cannot have words that are ‘non-concepts’. Every word invokes concepts, such as ‘unknown word that I can look up in an English dictionary’, ‘unknown word whose meaning I don’t understand’ or ‘word with which the user could denote several possible concepts’. Only when you fail to recognize it as a word at all could you say an utterance is a ‘non-concept’. Even then, you may understand what someone is conveying.
“It’s not our job”…. it’s what you do, every time you communicate! Daniel Dennett once wrote that a philosopher should not deny what he experiences. He should be very well aware of what he experiences and make sure it is accounted for. You cannot deny that every day, you infer what people mean. Language is (necessarily) vague: that is what gives it such power and the possibility to convey richness and shades of meaning. The theory of meaning you built your case upon is naive and too simple to have anything to do with reality. It is a toy model.
Moreover, declaring a central term in the vocabulary of the people you are trying to convince to be a ‘non-concept’ is a sure way of crippling the dialogue. You have to have extremely good arguments before you do such a thing, lest you end up in discussions like these, where we stray far away from the point you were trying to make. If you are really interested in ridding the world of unclear, confused talk about ‘object-oriented’ things, then declaring the term a ‘non-concept’ is the wrong way to go about it.
Really, I am quite sympathetic to what you set out to do and the reason for this criticism is that I wish you to succeed.
Are you aware that concept formation and categorisation are amenable to experimental investigation, that the experiments have been done and that the Aristotelean model turns out to have almost no connection to what people actually do when they understand the world?
Sure. The most significant papers are collected in “Concepts: Core Readings”, Margolis and Laurence (editors), MIT Press, 1999 which has been digitised by google books.
I just stumbled onto this blog/post/discussion and see that you’ve touched on some of the things I started writing about in 2006 when first learning Philosophy 101 (at age 50 after 30+ years of software engineering).
Your audience may be interested in http://ExistentialProgramming.com/ (although with the caveat that the early years were just transcribed notebook entries and I was just starting to learn western philosophy….but I think the blog entries have gotten better over time). :-)
Yeah, just don’t even try. To put links in comments, just paste the link.
Hi Rúnar,
Your conclusion, that “object-oriented programming does not exist,” can be easily disproven within the narrow construct of your argument. I’m not sure I agree with the way you have set up the question, but I’m willing to work with it.
There is one defining and essential concept that is present in every object-oriented or object-based system. This idea is variously called “dynamic dispatch”, “message passing”, “virtual method invocation”, “polymorphism”, or even “dynamic binding”. Some of these words have extra meanings or connotations (especially “message” and “polymorphism”) but that does not prevent them from having precise meanings in the context of specific systems / languages / papers. You could arrive at the concept inductively by examining OO languages and systems (criteria #1). It is also essential (criteria #2). Thus it meets your criteria for concept formation. You yourself acknowledge this definition, when you say:
“In the case of Smalltalk, for example, it is closed over late-bound lambda expressions with an implicit “self” argument. This is fairly precise, so I’m willing to concede that if there’s anything to be called OO, then it’s merely this. In that case it seems to boil down to late vs. early binding, and a.f(b) instead of f(a, b).”
The difference between a.f(b) and f(a, b) is quite significant, not just a matter of “which end to open the egg on”. The difference is related to the duality of algebras and coalgebras, or programming with observations rather than pattern matching. Later several people mention the idea of “first-class modules”, which would also count as object-oriented. “Dynamic dispatch” is exactly the essential characteristic that I discuss in my essay, which was cited in the previous comment by Eric. The idea is that objects are collections of first-class functions/procedures. This obviously generalizes functional programming. If that were all it was, then it would not be interesting. The focus of my essay is to point out that the interesting thing about object-oriented programming is the use of (collections of) functions to represent data. OO eschews the use of algebraic data types, with constructors and pattern matching, because they are fundamentally closed and not extensible. The OO approach optimizes openness and extensibility, at the cost of concision and optimization. Perhaps it’s not an accident that the reference to my essay ended the discussion.
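To make the contrast concrete, here is a minimal sketch in Scala (my own illustration here, not code from the essay), using integer sets:

```scala
// Algebraic style: a closed set of constructors; operations pattern-match.
sealed trait SetA
case object Empty extends SetA
case class Insert(rest: SetA, elem: Int) extends SetA

def contains(s: SetA, n: Int): Boolean = s match {
  case Empty           => false
  case Insert(rest, m) => m == n || contains(rest, n)
}

// Object style: a set *is* its membership function; the set of
// representations is open, since anyone can supply a new one.
trait SetO { def contains(n: Int): Boolean }

val empty: SetO = new SetO { def contains(n: Int) = false }

def insert(s: SetO, m: Int): SetO = new SetO {
  def contains(n: Int) = n == m || s.contains(n)
}

// A representation the algebraic version cannot admit without being edited:
val evens: SetO = new SetO { def contains(n: Int) = n % 2 == 0 }
```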
The fact that many people misunderstand object-oriented programming and want it to be more than it is does not cause the concept to vanish. If it did, then “functional programming” would also not exist. Like you, I prefer definitions that have technical content and are not just social conventions. The idea that objects model the real world is not fundamental; it was added later. Inheritance also has a precise technical definition (which is different from normal composition), but it is neither essential to OO nor unique to it, as many functional programs use inheritance (where it is called “open recursion”). Even subtyping and imperative effects are not essential. Inheritance, subtyping, and imperative updates are very useful, but not absolutely essential.
There should not be any fundamental conflict between OO and functional styles. They both exist and have benefits and drawbacks. They are just different ways of modeling computation, and they both exist happily within various versions of the lambda-calculus, if you care to look for them.
Since we are both in Austin, perhaps we can get together sometime to discuss? I’m easy to find… just google “William Cook”.
“The focus of my essay is to point out that the interesting thing about object-oriented programming is the use of (collections of) functions to represent data. OO eschews the use of algebraic data types, with constructors and pattern matching, because they are fundamentally closed and not extensible. The OO approach optimizes openness and extensibility, at the cost of concision and optimization.”
The problem here is that the *set of functions* is closed and not extensible; it seems to me that your claim of openness here runs straight into the expression problem – you can either have an open set of types and a closed set of operations, or an open set of operations and closed set of types. In any nontrivial project, we will tend to need both at some point but will have to choose one and have some hacky workaround for the other.
Yes, you are right that there are different dimensions of extensibility. Just to set the record straight, the “expression problem” was named by Wadler, but I was the first to formalize it thoroughly, and it was originally discovered by John Reynolds. See “Object-oriented programming versus abstract data types” on my publication page http://www.cs.utexas.edu/~wcook/publications.htm
The deeper issue is that depending on what kind of system you are building, different kinds of extensibility matter more than others. For building GUI applications, window systems, file systems, and such things, it is often more important to create new *kinds* of objects that implement standard interfaces, than it is to add new methods to existing data types. This is the reason why OO enabled the creation of the modern desktop and operating systems. If you are doing compilers or type systems, then it might be more useful to add functions to an existing data type. Another point is that OO has, out of the box, a better overall story for extensibility: objects support data extensibility directly, and inheritance can be used to extend objects with more functions. With pattern matching on disjoint sums, it is quite difficult to extend the data type. More recent solutions to the Expression Problem tend to combine OO and functional ideas, and we haven’t yet seen the end of research on this topic.
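For readers new to the expression problem, here is a sketch of the two extension directions (illustrative Scala, not code from any of the papers cited):

```scala
// Functional decomposition: adding an operation is local,
// adding a variant touches every existing match.
sealed trait Expr
case class Lit(n: Int) extends Expr
case class Add(l: Expr, r: Expr) extends Expr

def eval(e: Expr): Int = e match {
  case Lit(n)    => n
  case Add(l, r) => eval(l) + eval(r)
}
// A new `show` operation can be added without touching old code,
// but a new `case class Neg(e: Expr)` breaks every existing match.

// Object decomposition: adding a variant is local,
// adding an operation touches every existing class.
trait ExprO { def eval: Int }
class LitO(n: Int) extends ExprO { def eval = n }
class AddO(l: ExprO, r: ExprO) extends ExprO { def eval = l.eval + r.eval }
class NegO(e: ExprO) extends ExprO { def eval = -e.eval } // added freely
// ...but a new `show` operation would mean editing LitO, AddO, and NegO.
```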
William,
I have read your paper and I think it’s good and thought-provoking. I tend to disagree that the “open data types” approach optimizes openness, since then the operations are necessarily closed (Wadler’s expression problem), but we’ll put that to the side. I absolutely agree that there is no conflict between FP and OO, and there has never been one.
The kind of first-class module system that you describe is generally what I’ll refer to as a system having “objects”. But then again it’s not entirely clear to me what the difference is between that and polymorphic functions. For example, Haskell has a type-class facility that lets you classify types according to the operations they support. So for example, the type of the + function is forall n. Num n => n -> n -> n. It’s not decided until an argument is passed which + function to call. At that time, an appropriate Num dictionary is passed and the appropriate implementation is dispatched. So Haskell has that kind of dynamic dispatch. However, you cannot manipulate these dictionaries as first-class values in the language. Are they “objects”? Is Haskell now an “object-oriented” language? Or are these first-class semantics required? You could very well implement the same kind of type-class mechanism with a record-based language like MLPolyR, and that definitely would let you manipulate the dictionaries as first-class values. Would you then be doing object-oriented programming? It’s not at all clear.
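To spell out the dictionary-passing, here is a Scala sketch using implicits (the Num signature is simplified to a single method):

```scala
// A Num "dictionary" is just a record of operations for some type N.
trait Num[N] {
  def plus(x: N, y: N): N
}

implicit val intNum: Num[Int] = new Num[Int] {
  def plus(x: Int, y: Int) = x + y
}

// Haskell's  forall n. Num n => n -> n -> n  desugars to a function
// taking the dictionary as an extra argument; implicits fill it in.
def add[N](x: N, y: N)(implicit num: Num[N]): N = num.plus(x, y)

add(1, 2) // the compiler selects intNum statically, at the call site
```

Note that in Scala the dictionary is already a first-class value: you can write implicitly[Num[Int]] and pass the result around like any other value.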
It seems to be the rule that if you feel you’re doing object-oriented programming, then you are.
In normal/typical uses of Haskell type-class, the dictionaries are not first-class. They are bound statically at compile time. As a result, they are NOT objects. It is possible to use an existential type to make the type-class dictionary be first-class, but this is non-standard. But the result does work more like an object. There is some discussion of this here, but I’m not sure it is definitive: http://www.haskell.org/haskellwiki/OOP_vs_type_classes
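In Scala the existential packaging can be written directly, since an implicit parameter can be captured as an ordinary field (a sketch, with a hypothetical one-method Show class):

```scala
trait Show[A] { def show(a: A): String }

implicit val showInt: Show[Int]       = new Show[Int]    { def show(a: Int)    = a.toString }
implicit val showString: Show[String] = new Show[String] { def show(a: String) = a }

// The element type is hidden; only the stored dictionary's operations
// remain visible. The resulting package behaves much like an object.
case class Showable[A](value: A)(implicit val dict: Show[A]) {
  def show: String = dict.show(value)
}

val xs: List[Showable[_]] = List(Showable(42), Showable("hi"))
xs.map(_.show) // dispatch goes through the dictionary carried by each value
```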
It is possible to create objects in almost every functional language, including Haskell, ML and Lisp/Scheme. There are many papers and libraries that show how to do this. This does NOT make these languages “object-oriented”. This example illustrates the true meaning of “object-oriented”: it is a language/system that has special support for creating and using objects. This kind of extra support includes subtyping, inheritance, tooling with IDEs, etc. Similarly, functional languages have special support for making functional programming useful, for example Haskell’s “do” notation. It is possible to write pure functional programs in most OO languages, but that does not make such languages “functional languages”.
Interesting post. I’m eager to read more.
Have you heard of the “Treaty of Orlando”? It’s a consensus on different mechanisms for sharing behavior in OO languages.
I’ve made this point before and gotten a different definition from every person who tried to argue that there is a definition for “Object-oriented.”
Yes, I’m very aware of the Treaty of Orlando. I was a PhD student with Lynn Stein at Brown at the time, and consulted with her. The Treaty focuses on the relationship between delegation versus inheritance and prototypes versus classes. In the end the two mechanisms are closely related; they just change the time at which binding is specified. Prototypes/delegation work at runtime, while classes/inheritance are more static. Both approaches use “objects” as defined above.
As Keith Braithwaite pointed out, the idea that a category requires that all elements share common properties, none of which can be omitted, is not borne out by empirical evidence. In addition to his references, see /Women, Fire, and Dangerous Things/. It adds other examples to Wittgenstein’s “game”. For example: “Is the pope a bachelor?” The answer from most people is “yes, but…” and people qualify that answer for sound reasons: doing so communicates.
Categories as we use them typically have central elements, ones that are more X-like than others: for people of a certain age, Hugh Hefner was the prototypical bachelor. Categories also have fuzzy boundaries: the pope is near such a boundary. A widower is another less-prototypical case: if we refer to a widower as a bachelor, that’s both a true statement and something we might need to elaborate on to avoid confusing our interlocutor.
It’s also useful to try (I have!) to come up with a definition of “chair” that distinguishes chairs from “stools”. (Note that many bar stools have arms and a back.) Don’t forget the existence of bean-bag chairs. Now, you might be tempted to say that we are all fools and that bean-bag chairs aren’t chairs at all. But how can you justify that with induction? On what do you base the induction to what a “chair” is, if not on what people point to when they refer to chairs?
Which brings me to an answer to this question of yours:
“If it is not the case that things called ‘game’ have a shared characteristic, then why do we use that term? What could it possibly mean? Why is there even such a word in the first place?”
I’m a fan of the American Pragmatism of Peirce, James, and Dewey. It was born in part as a philosophical reaction to evolution (see /The Metaphysical Club/ for an OK history), and it emphasizes that we humans are beings in a world that we constantly act to change, with greater or lesser success. They thought that a philosophy that didn’t grapple with that must be sterile. A Pragmatist’s answer to “Why is there even such a word in the first place?” would be something like “Because it enables us human beings to achieve our goals in the world — which world, for humans, is not just physical but also intensely social — well enough. We don’t have a precise definition for ‘game’ because that would serve almost no one. It’s like the reason our eyes are fooled by optical illusions: getting better at not being fooled didn’t confer any advantage.”
William James’s /Pragmatism/ is a good early introduction. Menand’s /Pragmatism: A Reader/ is decent too, especially because it gives some exposure to Peirce, who doesn’t get enough.
Austin’s /How to Do Things with Words/ fleshes out the picture. He points out that not all utterances are propositional (“The cat is on the mat.”, “A chair is…”). Many are also what he calls “performative” in that they change the world (“I now pronounce you man and wife.”). Some have argued that all language has a performative component; that utterances are uttered so as to change something in the world. (As the poet and programmer Richard P. Gabriel has put it, “a poem is a program that executes in the reader’s brain.”) A word is a tool. We don’t judge a hammer according to some platonic ideal of hammer-ness; we judge it by whether it serves its purpose. Similarly, we oughtn’t judge “game”, “chair”, or “object-oriented programming” by the precision of their definitions, but by whether they serve our purposes.
P.S. You mention somewhere that species are identified by inductively discovering common traits. I recommend looking into cladistics. I believe it’s still pretty dominant in the taxonomy that actual biologists do. Story: back in the early-ish days of object-oriented programming, I read a piece in The Economist about cladistics, and looked up someone in the University of Illinois biology department, all fired up to see how cladistic techniques might be applied to what (even then) seemed to me the unworkably rigid categorizations of class-based languages. When I emailed him asking to set up a meeting, he declined by saying that he got someone from computer science contacting him every couple of years, and it was always a bust, because they just weren’t prepared for the messiness of the world taxonomists have to deal with.
I’m still waiting for Runar to define the essential characteristics of a “game”. Should be a straightforward task — or do they not exist, either?
Scroll up a bit.
It seems you asserted that there were essential characteristics of a “game”, but perhaps a fuller answer would not quite fit in the margin.
Here are four games to think about:
chess, a game of catch, military ‘war games’, and solitaire.
OK, so how do you know those things are games? Is that established a priori?
That is a rhetorical question, by the way. This question of defining “game” is a red herring.
If I can quote you:
“But try as you might, there’s no objective definition of what ‘object-oriented’ refers to.”
And I suppose the same goes for “game” (after first claiming there was an objective definition, you appear now to have changed your mind). I hardly see how that’s a “red herring”, it seems rather an important point about the nature of language and meaning.
I’m just trying to point out that you seem to already know a lot of games. So there has to be some characteristic (or multiple possibly non-overlapping sets of characteristics) they have in common. Otherwise, why would you refer to them with the same term? The answer is that it serves a cognitive utility. There is a difference between saying “this is a game” and “this is not a game”. There are some attributes that you and I find convenient to refer to that way, but the use is highly contextual and fluid. But “object-oriented” is not that way at all. There is no useful distinction at all between programming that is object-oriented and programming that is not. It is a mere shift in perspective of the programmer.
Right, and “object-oriented” must also serve a cognitive utility — otherwise, why would the term exist? Evidently, even you find the word useful to describe a “shift in perspective” on the part of the programmer. Is that an “object-oriented perspective”?
The term exists to make a distinction without a difference.
I don’t have anything else to say about this, so feel free to get in the last word.
I am happy to accept your final, resplendently incoherent answer, that the term “object-oriented” exists for the precise purpose of making a distinction without a difference.
You’ve just roamed through most of the topics I’ve blogged about the last few years (ExistentialProgramming.com)… like most philosophical discussions, it all comes down to definitions, and definitions depend on your purpose for asking. See this post about science using 20+ competing definitions of “species”, where which one you pick depends on why you want to know: http://existentialprogramming.blogspot.com/2010/05/purpose-of-thing-is-in-us-as-well-as-in.html
Ever since reading Cook’s paper, I’ve been hoping for some better resolution between OO-impressed and OO-unimpressed camps. The definition of “objects” /is/ unfortunately too easily reduced to a semantic debate.
After reading Cook’s paper, I started using the term Simulation-oriented programming to refer to what seems to be attacked by posts like this one and Kiselyov’s (http://okmij.org/ftp/Computation/Subtyping/). These posts often have arguments that seem to presume a coupling between objects and features like inheritance and mutable state. I try not to use the term “object orientation” to avoid confusion. But when I say “object” these days, I find myself talking about them under the constraints Cook defines. Although Cook’s definition of objects may not be commonly accepted, I prefer it for its apparent 1) consistency with prior art 2) concreteness 3) minimalism 4) elucidation of the duality between ADTs and objects. The discussion of this duality seems much more meaningful than quibbling over the clearly false dichotomy between a never-well-defined “OO” and the sometimes-misunderstood “FP.” I really like that under Cook’s terms, untyped lambdas are /both/ objects /and/ pure functions, which makes the distinction between “OO” and “FP” seem particularly uninteresting. Comparing objects to ADTs seems more to the point and less intellectually evaporative.
However, I’m far from an authority on such things. I’m really uninterested in baseless proofs of the existence of something as vague as “object-orientation.” I’m not even really interested in deconstructing Cardelli’s object calculi, which are far from minimal and also seem to miss a lot of the essence of the formalized abstractions. Instead, I’d like to see more discussion about Cook’s comparison of the pattern matching of ADTs to the autognosticism of objects. I’m also interested in arguments for or against looking at objects as coalgebras. Even if we don’t call these coalgebras “objects,” the coalgebra should continue to exist.
Although programmers may disagree over the particulars of OO programming, at least one thing it has meant to me, aside from the particulars (encapsulation, polymorphism, etc.), is that the abstraction should bind the methods and the data to the object the program is representing. This is different from functional programming, where the methods would exist separate from the data structures on which they operate.
Another thing to understand when programming is that a programmer builds up some mental model around which to think about the problem he’s addressing. That mental model does affect how he constructs and reasons about software development.
The OO paradigm does closely correlate to how people think about things in an abstract sense. Even before I gained years of programming experience, when I was first introduced to the OO paradigm, it felt natural. It wasn’t much of a leap to think in terms of properties and methods bound to the objects to which they belonged.
I have since retrained my brain to think in more functional terms, and even though I prefer the paradigm, it does present a less natural mental model. Because of familiarity and experience I handle it fine, but I handle it as an experienced programmer. I can’t imagine that a layperson would conceptually reason about a problem using a mental model more closely akin to functional programming than to object oriented programming.
There is always some dissonance between the actual code abstractions and the mental model one uses to reason about real-world problems inside the vacuum of a computer. I believe that the average person/programmer will have an easier time understanding and maintaining code when that dissonance is reduced. That is, the closer the code can reflect the mental model in which a person thinks about a problem, the easier that code will be to understand and maintain. Again, I refer to average programmers as it may take a great deal of conditioning for a person to pick up the functional paradigm/way of thinking.
Attributing “not even false” to Tony Morris? I thought “not even wrong” should be credited to Wolfgang Pauli.
duffymo, it really should be credited to Aristotle, but I first heard that particular way of describing the arbitrary from Tony.
Citation for Aristotle?
Aristotle doesn’t use those exact words, but calls such things “absurd”. See Metaphysics under the discussion of the law of the excluded middle. Aristotle makes a distinction between a thing being or not being on the one hand, and a proposition being true or false on the other hand, implying that propositions can be neither true nor false insofar as they don’t refer to things that _are_. Also see “On Interpretation” for discussion of propositions that are neither true nor false, and the last sentence from part 6 of the Physics:
“If it contains in the case of sensible things, in the case of intelligible things the great and the small ought to contain them. But it is absurd and impossible to suppose that the unknowable and indeterminate should contain and determine.”
Incidentally, Aristotle also talks about “the impossibility of the void” in the Physics, and identifies the fallacy of the reification of zero. My objection to the term “object-oriented” in the English language is essentially the same as my objection to a NULL term in programming languages, which I believe to stem from the same fallacy. See my post “Noumenal Null”.
I’m not sure what meaning to attach to “functional language”. I usually say “purely functional language” if I’m talking about a language that enforces referential transparency.
Thanks for the post. Very interesting. I look forward to your follow-up articles.
Your “OO doesn’t meet criteria #1” point is very convincing. As professional developers / engineers we should learn to deal with problems at the correct level of abstraction, without requiring the training wheels of physical analogy.
That said, while I’m about 90% in agreement with you, I’m not yet convinced that the Emperor has no clothes. I agree that the term has become a poorly defined grab-bag. But despite the ambiguity, there are still techniques and strategies that most people would consider “OO”. When we think of these as tools to be used where appropriate (and not religiously), then they still have a place in every programmer’s repertoire, IMO.
To give some examples of “OO” techniques that I’ve personally found very profound:
– “Everything’s an object” – as embodied by languages such as Python or Lisp. This allows us to work with “entities” at a level above hardware-defined types, and to treat them consistently.
– Message Passing – I think that learning to divide a program into entities that communicate by passing messages can be a very good way to structure certain types of problems.
Perhaps your follow-up articles will be enough to convince me that both ideas are really just silhouettes of some more profound underlying concept(s).
Anyway, good article.
Matt,
If every thing is an object, and every object is a thing, then thing = object. Of course, this is not what people mean when they say that. What they really mean is implied by “treat them consistently”, which is more precise. What you mean is that the algebra is closed. In the case of Smalltalk, for example, it is closed over late-bound lambda expressions with an implicit “self” argument. This is fairly precise, so I’m willing to concede that if there’s anything to be called OO, then it’s merely this. In that case it seems to boil down to late vs. early binding, and
a.f(b)
instead of f(a, b).
Reminiscent of the argument of whether to open eggs on the big end or the little end. I agree that message passing models are interesting and useful, for example the “actor model”, which provides a perspective on concurrent programming with side-effects, and of course the pi-calculus.
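For what it’s worth, the late-vs-early binding contrast fits in a few lines (a Scala sketch of the Smalltalk-ish picture; the names are invented):

```scala
trait Animal {
  def name: String
  def speak: String = "I am " + name // `name` is resolved against `this`,
                                     // i.e. late-bound via the implicit self
}
class Dog extends Animal { def name = "Rex" }
class Cat extends Animal {
  def name = "Tom"
  override def speak = "miaow"
}

val a: Animal = new Cat
a.speak // "miaow": which body runs is chosen at the call site by the
        // runtime class of `a` -- a.f(b) rather than f(a, b)
```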
I’m interested in what books and references you found to be useful when learning the fundamentals, as you put it.
Channing,
Certainly too many to enumerate. The best resource has been seeking out people who are smarter than I am, made easier now that we have the internet. That said, some great eye-openers include, in no particular order:
Structure and Interpretation of Computer Programs by Abelson and Sussman.
Databases, Types and the Relational Model by Date and Darwen
Introduction to Objectivist Epistemology by Ayn Rand
Anything at all by Aristotle
Why dependent types matter by Thorsten Altenkirch
A Gentle Introduction to Haskell by Hudak, Peterson, and Fasel
Divine Proportions: Rational Trigonometry to Universal Geometry by N J Wildberger
Fluid Concepts and Creative Analogies by Douglas Hofstadter
Once the course is set, it becomes necessary to fill the void with a good treatise on logic, such as “Logic: An Introduction” by H W B Joseph; a good overview of philosophy, such as “A History of Philosophy” by Wilhelm Windelband; a good discrete maths book; and a categories introduction such as “Conceptual Mathematics” by Lawvere and Schanuel.
Thanks for that list – I have a few of those already which is encouraging.
Apocalisp,
I think you’re correct that “everything is an object” = closed algebra. And I specifically meant late-binding. I would have defined it less precisely as: “everything” (data, environment, functions, code tree, etc.) is an accessible runtime “object”.
In my experience, this encourages some profoundly different programming techniques when compared with languages like C or Java. And with a language like Haskell, it’s something entirely different again. So I don’t know how to classify these techniques as anything other than “OO”.
Also, I don’t think the choice of a.f(b) or f(a,b) is quite as superficial as you suggest. Even Abelson and Sussman made the distinction in SICP, where they say about message passing:
“An alternative implementation strategy is to decompose the table into columns and, instead of using ‘intelligent operations’ that dispatch on data types, to work with ‘intelligent data objects’ that dispatch on operation names”
Now, syntactically this would always be (f a b) – since the book uses Scheme. But the difference in how you conceptualize the runtime behavior is quite dramatic, IMO.
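To see how directly the book’s idea transliterates, here is the same dispatch style in Scala rather than Scheme (a sketch modeled on SICP’s bank-account example, not code from the book):

```scala
// An "intelligent data object": a function from an operation name
// to the behaviour for that operation.
def makeAccount(initial: Int): String => Int => Int = {
  var balance = initial // state captured by the closure
  def dispatch(op: String): Int => Int = op match {
    case "deposit"  => amount => { balance += amount; balance }
    case "withdraw" => amount => { balance -= amount; balance }
    case other      => sys.error("unknown operation: " + other)
  }
  dispatch
}

val acct = makeAccount(100)
acct("deposit")(10)  // 110 -- ((acct 'deposit) 10) in Scheme
acct("withdraw")(30) // 80
```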
:) To restate my position (hopefully not beating you over the head): I think that despite the ambiguity of the term “OOP” and the vast amounts of noise and misinformation, there really is some “meat on them bones”, in the sense that certain techniques that most people would agree are “OO” (even if they couldn’t tell you why) provide useful tools for creating and reasoning about programs.
Still, great article, and thanks for the discussion.
Matt,
Instead of classifying those techniques as “OO”, I might suggest classifying them as having “first-class semantics”. Somebody please chime in if you know the formal name for this kind of language design approach. For example, in LISP, every lexical construct is a first-class entity in the language itself (code is data is code). The same goes for Python and Smalltalk.
“Intelligent data objects that dispatch on operation names” says to me: “first-class modules”.
In the end it’s all binary. No objects, no functions, no statements, no procedures, just 1’s and 0’s and a computer processor. The purpose of object-oriented programming is to make it easier to code; it’s to make the processor do what you want it to do (move electrons if you want to be specific). Easier coding is the idea behind different “levels” of programming languages. It’s why you’ll hear C++ described as a “high-level language” and machine code as a “low-level” language.
Object-oriented programming is a programming technique, not an abstraction. There’s no disorientation to be had. “Object-orientation” is a “valid concept” as it is a useful tool for creating sequences of 1’s and 0’s which are then processed by the microprocessor. It is a combination of functions and variables, which is useful to the programmer. It is not an “abstraction that is made in error”; it is a tool made in earnest. Would you say functions are “invalid concepts” because they are just specialized subroutines? Are stacks out because they are merely specialized arrays?
James,
I’m not sure if your attempt at empiricism is meant as satire, but it is refreshing in an endless sea of rationalists. However, I think you’re wrong on several counts, as empiricists necessarily are. In fact, your argument applies at every conceivable level of detail, and is equally wrong at each one. So you might say it’s “fractally wrong”.
1. Programming has very little to do with computers. To paraphrase Edsger Dijkstra: “programming is as much about computers as astronomy is about telescopes.” It isn’t in the end all necessarily binary. To wit, you can take a simple program and execute it by hand, on paper, or in your mind. The nature of the physical machine isn’t accessible at the level of programming, and it’s not necessary to know it. The level of abstraction provided by programming languages omits this fact, because it is an abstraction. There must be some process that the programming directs, but it can be any process, not necessarily a binary one. That is the nature of abstraction. On the other hand, following your empiricist argument to its absurd conclusion, there are no ones and zeroes, there are just electrons anyway, so nothing is really binary, and there are no computer processors, just lumps of atoms in an elaborate configuration. There are then no thoughts either, just firings of neurons, which in turn don’t really exist since they’re just lumps of organic matter, which is again just a configuration of atoms.
2. “The purpose of object-oriented programming is to make it easier to code” presupposes that you and I both know what you mean by “object-oriented programming”. I don’t, and I don’t think you do either. But you’re on to something, which is the antidote to your empiricist disposition: The purpose of abstraction is to make it easier to think. Higher levels of abstraction let us think about things in more general terms. A “low-level language” such as assembly (and, to a degree, C) helps us think in terms of a specific machine. A “high-level language”, such as Haskell or LISP, lets us think in terms of any machine. And a yet higher-level language, such as pi-calculus, lets us think in terms of the governing dynamics of processes.
3. Functions are not “specialised subroutines”. The concept of a function is centuries older. A subroutine is one way to implement a function. Nor are stacks “specialised arrays”. An array is one of many possible ways of representing a stack. Here again is the principle of abstraction, of the some-but-any. A function is evaluated by some but any process. A stack requires some but any memory structure to hold the elements on the stack. See “Purely Functional Data Structures” by Chris Okasaki for example.
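To illustrate the “some but any” point with the stack (a quick Scala sketch, not Okasaki’s code):

```scala
// A stack is specified by its operations; the representation is
// "some but any" structure that satisfies them.
trait Stack[A] {
  def push(a: A): Stack[A]
  def pop: Option[(A, Stack[A])]
}

// One representation: a linked list.
case class ListStack[A](elems: List[A]) extends Stack[A] {
  def push(a: A) = ListStack(a :: elems)
  def pop = elems match {
    case Nil    => None
    case h :: t => Some((h, ListStack(t)))
  }
}

// Another: a vector treated as a growable array. Same interface, same laws.
case class VecStack[A](elems: Vector[A]) extends Stack[A] {
  def push(a: A) = VecStack(elems :+ a)
  def pop = elems.lastOption.map(a => (a, VecStack(elems.init)))
}
```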
Oh I get it. The universe is a computer program. “Objects,” as you understand them, are proposed entities within that universe. However, they are non-real and are therefore analogous to mysticism. In other words, you think we live in The Matrix, and your helpful advice to Neo would be to think of Agent Smith not as human, nor as an object in an object-oriented program, but as a set of procedures. Neo would then say “Whoa, dude!” and facepalm, because he would realize if he had thought of them as object-oriented he might as well be a religious fundamentalist believing in invalid concepts. He takes your advice and pushes all thought of object-orientation out of his head. He then becomes disoriented and confused while trying to figure out how Agent Smith’s arm is moving in the Matrix with respect to the taste of orange juice in the Matrix, and is killed. Even more tragically, it turns out The Matrix was programmed in Java, and thinking about Agent Smith as an instance of public class AgentSmith, would have made it a lot easier to understand than trying to figure out how the orange juice procedure affected Agent Smith’s fisting speed.
There’s some satire for ya
James,
If I understand all that cryptic blather correctly after peeling off the layers of sarcasm, you’re accusing me of rationalism in opposing your empiricism. That accusation is unwarranted, since it’s a false dichotomy. The choice is not either reality with no recourse to concepts (Empiricism, Materialism) or concepts with no recourse to reality (Rationalism, “The Matrix”, Plato). The former means, ultimately, that there’s no such thing as consciousness. The second, ultimately, means that there’s no such thing as reality. At the heart of this false dichotomy is another: the mind-body dichotomy, or the “analytic-synthetic distinction”. I reject this distinction and uphold a third view, Objectivism, in which concepts are objective, i.e. not independent of reality, but abstractions made from the facts of the material world by an entity of a specific kind, for the very reason that it is not omniscient or infallible: a human mind.
If that’s not what you meant, then I’m afraid you’ve lost me. Further “Matrix” references will not be helpful because I only saw the first one and I didn’t like it.
I don’t see how your Randism factors into object-oriented programming, despite the word “object”, which I hope is not where your argument stems from. The concept of object-oriented programming has nothing to do with existentialist metaphysics.
This, I think, sums up your error:
The term “object-oriented” is at least honest in that it says what it implies, which is a frame of mind, an orientation of the programmer
The term object-oriented is not a frame of mind, or an orientation of the programmer. It’s a framework of a program, an orientation of the program. Program, not programmer. It is separate and distinct from the mind of the programmer. The programmer may choose to create a program using this framework for whatever reason they want.
James,
The term “object-oriented programming” has nothing to do with anything. There isn’t any such thing (except, perhaps, as commented by Matt; see above). If you have evidence to the contrary, please supply a succinct definition with a genus and differentia, and an example or two so that the rest of us may validate the concept by repeating the induction that led you to it. So far all I’m getting is that you know it intuitively or you heard it from somebody else. In that case, maybe you can tell me what “numberwang” is. I hear that 4 is numberwang, but 5 isn’t. Or maybe it was the other way round.
I don’t think “object-oriented” has anything to do with objectivity or Objectivism, though they have a common word stem there. I don’t know, ask a linguistic analyst, but I think that’s a red herring.
Ok, object-oriented programming is a programming style (genus) in which both functions and variables are unique to a hierarchical referent (differentia). How’s that?
James,
Can I understand this as saying that any programming with mutable first-class modules is object-oriented programming? It’s very similar to what Matt was saying earlier in that case. I’ll consider that in my second follow-up post, on the distinction between values and variables. Thanks.
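Concretely, a “mutable first-class module” in miniature might look like this (a Scala sketch; the names are invented):

```scala
// A record of operations over hidden mutable state, itself an
// ordinary value that can be passed around and stored.
trait Counter {
  def inc(): Int
  def read: Int
}

def makeCounter(start: Int): Counter = new Counter {
  private var n = start // reachable only through the operations above
  def inc() = { n += 1; n }
  def read = n
}

def bumpTwice(c: Counter): Int = { c.inc(); c.inc(); c.read }
bumpTwice(makeCounter(0)) // 2
```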
Ok, I looked up what “mutable first-class modules” means, and yes, basically I would say that. (I think I would include constant objects, immutable objects, as well.) So, basically, first-class-modules-oriented programming = object-oriented programming. To me, programs are binary flowing through logic gates. Everything else is programming style, which is merely intended to help the programmer produce the necessary logic for the input. I look forward to your follow-up…
I somewhat agree: there’s no such thing as “object oriented programming” at language level.
Of course, there is object oriented analysis and object oriented design :).
If I understand correctly, you’re equating concept with theory. That’s very powerful, but I don’t think it’s the everyday use of “concept”, and maybe that causes some confusion.
IIRC, Bertrand Meyer’s “Object-Oriented Software Construction” has a pretty solid definition of the OO theory. (I didn’t read it, but Brian Cantwell Smith’s “On the Origin of Objects” seems to cover it from the metaphysical point of view.)
I would not give much weight to Paul Graham; his accumulator definition in Haskell shows either ignorance or dishonesty.
OOP has few “nice” properties and few guarantees from the programming-language point of view (aka “OOP sucks”), but modeling the computing-space abstractions on the domain-space abstractions has proved to be very powerful.
Gabriel,
I’m not convinced about the existence of “object-oriented analysis and design”, in the sense that such a thing is somehow separate and different from analysis and design that isn’t “object-oriented”. But I don’t know. It seems arbitrary anyway, like a ritual with an accidental practical benefit (medieval physicians using leeches comes to mind). I have read some of Bertrand Meyer’s book (many years ago), and I think he is brilliant, but he seems to be rationalizing “OO” rather than providing the method for its discovery. That is, he seems to be patching an a priori notion of “OO” so that it achieves modular software construction. I will have to read the book to convince myself either way. Brian Cantwell Smith looks interesting. Thanks for that recommendation.
How can we program without polymorphism and inheritance and still have Object-Oriented programming?
Carfield,
Like I say, it’s a loose grab bag, so you can very well omit anything you want and still call it Object-Oriented if that floats your boat. That said, you can readily do away with “inheritance” by using composition. As for polymorphism, well, many languages employ polymorphism and wouldn’t be called Object-Oriented by any stretch.
See “What Does Substitutability Really Mean?” by Chris Date for a good discussion on the matter of subtyping and polymorphism (from a decidedly non-OO point of view, granted).
Any serious discussion on the subject would have to incorporate Luca Cardelli’s Theory of Objects into the mix. From a theoretical standpoint, object oriented programming is modeled via coalgebras.
Cardelli and Wegner have classified four types of polymorphism: Parametric, Inclusion, Overloading, and Coercion. A language like Haskell does not have inclusion (subtype, subclass) polymorphism but has Parametric (generics) and Overloading (type classes).
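For concreteness, here are the four flavours in Scala terms (a rough sketch; the mapping to Scala features is my own):

```scala
object Poly {
  // Parametric: one definition, uniform at every type.
  def id[A](a: A): A = a

  // Inclusion (subtype): accepts any subtype of Seq.
  def size(s: Seq[_]): Int = s.length

  // Overloading: the same name, separate definitions per type.
  def shout(s: String): String = s + "!"
  def shout(n: Int): String    = n.toString + "!"

  // Coercion: the Int literal is silently widened to Long.
  val x: Long = 42
}
```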
The typical idea of object oriented programming incorporates three tenets:
1. Encapsulation
2. Inheritance
3. Subtype Polymorphism
These things are possible in practically any PL but an OO Programming Language would be one that actively facilitates this model.
Chris,
Good comment, thanks. I did have a brief look at “A Theory of Objects” before writing the post. The essential unanswered question is: what is that theory for? Why would one need it, or arrive at it, if one did not already have a notion of object-oriented programming? It’s all very formal, but seems arbitrary.
I think that Inclusion (subtype polymorphism) and Coercion significantly overlap. Consider a language that supports implicits. If there exists an implicit function (a:A -> b:B) in a given context, then A logically implies B in that context. It’s not clear to me how that differs from A being a subtype of B (in context). The crucial notion here is substitutability (see What Does Substitutability Really Mean? by Chris Date for an excellent discussion).
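A small sketch of the implicits point (the unit types here are hypothetical):

```scala
case class Feet(value: Double)
trait Meters { def value: Double }

// An implicit function A => B lets any A appear where a B is expected...
implicit def feetToMeters(f: Feet): Meters =
  new Meters { val value = f.value * 0.3048 }

def depth(m: Meters): Double = m.value

// ...so within this context Feet is substitutable for Meters, which is
// operationally hard to distinguish from Feet being a subtype of Meters.
depth(Feet(10.0))
```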
That’s a nice link! I haven’t seen that before, and that underscores my point, that there isn’t any special kind of programming that is OO (admittedly, picking on OO specifically is a little unfair). Programming, abstractly, essentially, is done in the universal language of logic. Euphemisms serve only to shield programmers from this fact. Writing code in a specific programming language (or a specific kind of programming language) is (or ought to be, rather) the last step, in which one translates the language(s) of one’s program into the terms of the chosen programming language.
So if we want to get a head start on the value-vs-variable distinction, where do we go?
I’m reading the Chris Date paper on substitutability, and he makes continual references to it, but none of them happen to contain any links.
Ok, part 6 [http://www.dbdebunk.com/page/page/622347.htm] of Date’s paper gives some hints on value-vs-variable.
Value = immutable, variable = mutable? That’s all?
“Valid concepts are arrived at by induction.”
What?
That’s about the silliest thing I’ve ever heard. The world is full of concepts which are not logically induced. Like “liberal” and “conservative”, for example. Thinking about everything in the world through mathematical terms is a cognitive defect. If you tried, you’d find determining whether any particular concept is valid would be NP-complete. Rather, we must rely on statistical reasoning. It’s possible to look at some code and see if it’s “OO-ish” the same way you can evaluate an animal and see if it belongs to a species (obviously not possible through logical induction!)
Dear anonymous coward,
Excuse me, but evaluating whether an animal belongs to a species is deduction. Induction is coming up with a grouping into species by essential differences in the first place.
The world is indeed full of foggy, undefined rubber-terms like “liberal”, “conservative”, and “object-oriented”. Passively accepting them makes your thinking vague, which is a cognitive defect if I ever heard of one. Thinking is hard and requires careful checking.
It’s funny for someone to claim that thinking in math terms is a cognitive defect, then to go on to make general claims about universals, using mathematical terms. “Determining if any particular concept is valid would be NP-complete?” “We must rely on statistical reasoning?” Let’s not use mathematical terms or anything.
At any rate, you’re just making that up. Or to put it another way: It’s highly probable that it is claptrap.
Excuse me, but evaluating whether an animal belongs to a species is deduction. Induction is coming up with a grouping into species by essential differences in the first place.
You’re not even speaking very clearly when you speak of Induction. Are you talking about Inductive Reasoning or Mathematical Induction?
I assumed you were talking about mathematical induction, but when people talk about “induction vs. deduction” they’re usually talking about inductive reasoning.
The distinction is pretty important, because inductive reasoning isn’t a cognitive defect, it’s a logical fallacy. An example from Wikipedia:
Which is close to the canonically wrong “black swan” example from that trendy bullshit eponymous pop-sci book making the rounds.
So if you’re actually talking about Inductive reasoning, you’re even dumber than I thought. Of course, on the other hand it’s perfectly reasonable to define a programming language through mathematical induction from a few simple premises.
Here’s more from wikipedia:
At any rate, you’re just making that up.
And you didn’t make up the bit about “valid concepts being defined through induction”?
When I said “Determining if any particular concept is valid would be NP-complete?” I thought you were talking about mathematical induction. Since Inductive reasoning isn’t logical, it doesn’t have the same problem, but it’s mostly just a stand-in for more advanced forms of statistical modeling. If you go by pure inductive reasoning you could say something like “all the swans I’ve seen are white, therefore all swans are white”. Whereas a more advanced person would say “I’ve seen n swans, they’re all white, therefore there’s an x% chance that at least 99% of swans are white” and then they might look for observational biases.
The world is indeed full of foggy, undefined rubber-terms like “liberal”, “conservative”, and “object-oriented”. Passively accepting them makes your thinking vague, which is a cognitive defect if I ever heard of one. Thinking is hard and requires careful checking.
Who said anything about “passively accepting them”? They still exist and are still useful. Almost every concept we deal with in our daily lives is only superficial. Take a “car” for example. Seems pretty concrete, but there are vehicles for which it is difficult to determine whether they are cars or trucks, yet it would be difficult to communicate with people without being able to talk or think about “cars”. (And each make and model of car could still have gray cases. What about a Honda Accord that had its engine swapped with a Jetta’s?)
What I’m attacking here is your theory of conceptual validity. If it was true, how could you induce the concept of induction itself?
Thinking is hard and requires careful checking.
Thinking certainly seems to be quite a challenge for you. And if you can do it through logical deduction (which is certainly NP-complete) of axioms arrived at through induction (which is a logical fallacy) then clearly you’re doing it wrong.
Ooh crap. The last sentence should read:
Huh, you posted my correction but not the post I was correcting. What a coward you are. Weak.
“Example”,
Your comment posted just fine as far as I can tell, unless you mean some other comment than the rude long one above. Listen, if you’re going to be rude, take it somewhere else. I have no obligation to read it or post it, whoever you are.
It should be clear from the context whether mathematical induction or inductive reasoning in general is being discussed.
Regarding “Inductive reasoning is a logical fallacy.”:
1. No it isn’t. Hume was wrong. I don’t subscribe to his empiricism (or rationalism for that matter, i.e. Popper). There is no “problem of induction”. This problem is invented by committing the error of dropping context. It amounts to arguing that, since it’s not possible to reason from specifics in a given context to a generalization that holds universally in all contexts, induction is therefore invalid. It’s obviously possible to reason from specifics in a given context to a generalization that holds given that context. But if you’re a Kantian, you will necessarily miss that since the context demanded of the knowledge of universals (noumena/forms) in the Kantian/Platonic view is nothing less than omniscience.
2. Isn’t that statement an example of inductive reasoning? You have to know what “inductive reasoning” is and what a “logical fallacy” is, by induction no less, before you can deduce that one is an example of the other. Of course, this is assuming that you’re reasoning at all and not just parroting Wikipedia.
It is not necessary to understand induction in order to form concepts by induction. Put another way, you don’t need to understand epistemology to acquire knowledge. “Induction” as a concept (a meta-concept, if you will) is an integration of a vast amount of information, and yet a child uses this process to form his very first concepts.
Look, this discussion involves the very nature of existence and consciousness, so we’re not going to solve it here. Philosophers have been duking it out over this for a very long time indeed. To sum it up, inductive reasoning is assumed by the early philosophers (Socrates, Aristotle), and attacked by modern empiricists and rationalists (Kant, Hume, Popper, although defended by Locke). I reject these attacks, and I’m not alone in my rejection. See “Introduction to Objectivist Epistemology” for a solid defense of induction, and Aristotle’s Posterior Analytics as a supreme example of induction at work in philosophy. See also “Induction in Physics and Philosophy”, a lecture by Leonard Peikoff. A book by the same name, based on that lecture, is in progress by David Harriman. You can read examples of Harriman’s work here: http://www.theobjectivestandard.com/topics/science-technology.asp
See here for a treatment of Popper:
http://www.libertarian.co.uk/lapubs/philn/philn065.htm
Ah, I didn’t realize you were an objectivist. That explains it.
It is not necessary to understand induction in order to form concepts by induction. Put another way, you don’t need to understand epistemology to acquire knowledge.
I never claimed you needed to understand inductive reasoning in order to use it; what I disputed was that inductive reasoning was the only way to form valid concepts. If you can’t arrive at inductive reasoning through inductive reasoning, then inductive reasoning isn’t even a “valid concept” itself. And in that case, your definition of “valid concept” contains invalid concepts.
If on the other hand, you were willing to allow concept formation by other means, then perhaps you could find a way to form the concept of induction. But then you would have to accept the validity of any other concepts that could be derived in the same way.
In either case, your statement about conceptual validity would be wrong.
Your argument amounts to showing that induction is invalid by showing that induction is not inductive. This argument presumes what it intends to refute.
Your other premise is that “you can’t arrive at inductive reasoning through inductive reasoning”. This premise is absurd. Of course arriving at a concept of induction requires a process of induction. There is no problem with that. It is not circular, but (to pun a little bit) recursive. I can see how it would be a problem if you hold the primacy-of-consciousness premise, that induction does not exist until somebody thinks of it, but this premise is false. Existence has primacy. Reality is mind-independent. The human mind is a specific kind of thing and its primary means of grasping reality, for the survival of the human, is induction. Identifying this fact requires induction, but this is not a problem because we are identifying it with human minds.
I am not willing to allow concept formation by other means. If you arrive at something by any other means, then you have arrived at something that is, by definition, not a concept. When I say “concept”, I specifically mean a generalization from the essential similarities of units, omitting specific differences. This is a description of the process of induction. If you mean something else when you say “concept”, then you’re talking about something that has no bearing on the present discussion.
Okay, so you’re saying that inductive reasoning (not including mathematical induction) exists intrinsically in the universe regardless of the human mind, but we can discover it through its own application. That seems reasonable, but I still don’t think that goes far enough to prove the non-existence of other valid concept formation methods; after all, if induction is “out there”, the other valid concept formation methods are there as well.
Of course you might just say that other types I might think of using would count as a kind of induction as well, but I would say that any useful concept formation method would have to tolerate some exceptional cases, otherwise any attempt to create concepts that apply to the real world would need an enormous list of caveats, and would not be useful for communication or thinking (IMO).
I don’t exactly follow what you mean, but I’d say it like this: induction exists as an attribute of the human mind, regardless of that mind’s (or any mind’s) knowledge of it.
Do note that you cannot prove the non-existence of something. Proof does not apply to things that don’t exist. To call for the proof of a negative is a logical fallacy.
You hit on an important point with the tolerance of exceptional cases. You will find that Objectivist epistemology does indeed describe a method of concept formation by induction in which concepts are tolerant to new information, because they are contextual. If the context expands (e.g. an exceptional case is found), the concept gains a new definition in the expanded context, without contradicting the previous definition. To borrow OO jargon, the expanded definition becomes a superclass of the previous one.
Hello,
I really enjoyed the essay. I find the definition of a valid concept a little abstract though. Can you give an example of a concept which meets the 3 criteria outlined? Would the theory of evolution count?
Thanks,
Sam
Sam,
Yes, Darwin’s theory of evolution is a good example of the inductive method at work. The theory of the atomic structure of matter is another superb example. You don’t have to look far to find concepts that match the criteria outlined. For example: “three”, “criteria”, and “outline”.
Excellent discussion, many thanks to all. Random bit of food for thought:
http://www.google.com/search?q=object+oriented+calculus+OR+algebra
seems like there are some folks who are taking a stab at making some mathematical canonicalization of OO, for whatever that might be worth.
>anti-conceptual, in that it displaces actual concepts like algebra
-> Coalgebras
>calculus
Read Cardelli’s “A Theory of Objects”.
…
I don’t exactly follow what you mean, but I’d say it like this: induction exists as an attribute of the human mind, regardless of that mind’s (or any mind’s) knowledge of it.
Lots of things exist as attributes of the human mind that are actually illusions. As an Objectivist, you’ll appreciate that most people mistakenly believe in religion, and one powerful reason for that is that people have an innate tendency to anthropomorphize things. We automatically assign motivations to things based on their behavior and how we imagine what would motivate another person or animal to do those things. That’s great when you’re running around in the jungle — you get an intrinsic feeling for dealing with other people and animals. But it also causes people to believe that the world as a whole is another mind.
Induction could be similar. Just because it exists in our minds doesn’t mean it ‘exists’ as anything other than an approximation or a built-in shorthand.
I think you’re shifting the context here so we’re talking about two different things. When I say “attribute of the human mind” I mean that induction is part of the nature of the mind. The mind exists, and induction is an essential part of what it is. When you say that many things that people believe are “actually illusions”, that’s true, but only because inductive reasoning is not automatic or infallible. Induction is a mode of cognition. Religion is not. Religion is the result of an error in inductive reasoning.
You say that inductive reasoning isn’t infallible, but do you mean that induction itself can lead to wrong conclusions, or only that humans can fail in its use?
What I mean is, there are some fundamental laws of the universe, and some fundamental laws of Mathematics (such that they can be applied by a computer). Are you suggesting that Induction is somehow a fundamental part of the universe as well? Or is it just a “rule of the mind”?
I mean the latter. Reasoning can be done incorrectly. Insofar as it is done correctly, it will lead to knowledge (i.e. to true conclusions).
I don’t understand your second question.
This is where your argument makes a non-sequitur leap. The fact that something cannot be objectively defined does not mean it can ‘mean anything you want’. The usual example in philosophy is the fact that what constitutes a ‘game’ is not well defined. Although it is not well defined, everyone involved in discussions about the issue agrees that some things definitely are games and some things definitely are not games. The discussion focuses on the ‘grey’ area. Of course, there are arguments that would make *everything* a game and arguments that would make *nothing* a game, but in the end everyone agrees that the term ‘game’ applies only to a limited number of concepts/situations. The term ‘object-oriented’ is of the same sort. Although it isn’t well defined, it is clear to all that there are bounds to its meaning. As such, it is not a ‘non-concept’ and you cannot say that there is ‘no such thing as OO programming’. You may want to argue that it is a useless or debilitating concept, and depending on your argument I might agree, but what you are claiming here is, from a philosophical angle, too strict.
Ivo,
So what you’re saying is that concepts are defined by consensus, and the only thing preventing some vague notion that isn’t objectively defined from meaning anything we want is other people’s permission. Do I have that just about right?
The example of “game” is not at all like “object-oriented programming”, because there are certain essential characteristics that, like you say, make something definitely a game. The existence of these shared characteristics of games is what makes the concept of “game” useful in the first place. However, there is no essential characteristic of “object-oriented programming”. It is not at all clear when something is definitely OO, or definitely not OO.
I don’t see what you mean by “too strict from a philosophical angle”. Which philosophy, specifically? Too strict for whom and what purpose?
He’s referring to Wittgenstein, specifically. Please see “Philosophical Investigations”.
It’s actually very difficult to define the essential characteristics of a game. I invite you to try.
Firstly, let me stress that I largely agree with your blogpost. A concept like ‘object-oriented’ is often assumed to have more meaning, and more argumentative force, than it actually has. I agree that for many purposes it is warranted to […], and I agree that […]. My point was merely that when I connect the philosophical (but perhaps ‘logical’ would be a better word) dots, I do not see support for the statement that ‘object-oriented’ is a non-concept.
That may seem an unimportant detail when we end up with the same conclusion, but it can be dangerous to think of it in that way.
In your response, you imply that all things we call a ‘game’ have a shared characteristic. However, this is not the case. There are arguments that make *everything* a game: life is a game, meant to be enjoyed, but not to be taken seriously. There are also arguments that make *nothing* a game: even hide-and-seek is a way of measuring skill and it influences the social rankings of the participants. If all ‘games’ were united by at least one shared characteristic, such arguments could not exist.
These ‘extreme’ arguments are not dismissed because they are wrong: they are dismissed because they do not reflect the way the word ‘game’ is actually used. Some things aren’t considered a game by anyone. This is not because ‘game’ has a meaning that could not possibly be applied to the thing. It is because — and yes, this is almost circular — nobody actually uses the word ‘game’ to describe the thing.
That is the crux of the matter: concepts are defined by their use. Even if what people mean by ‘Object-Oriented programming’ is fuzzy, it’s often still clear what they mean, and you can’t dismiss their intention by pointing out that ‘Object-Oriented programming’ isn’t well defined. That may not be your intention, but it is something that you are easily led to when you have arguments to the effect that ‘Object-Oriented [is] a non-concept’. It *is* a concept and it is a meaningful concept at that. It’s also a fuzzy, convoluted, impractical concept that is often wielded as if it were authoritative by itself, and your upcoming deconstruction of the concept may be very helpful in explaining its limitations.
Ivo,
On the contrary, there is no support for the statement that object-oriented is a concept. By stating that it is a non-concept, I’m merely pointing out the absence of this support, and I cannot be called upon to prove a negative.
Why is it dangerous to think of it that way? What are the harmful consequences?
If it is not the case that things called “game” have a shared characteristic, then why do we use that term? What could it possibly mean? Why is there even such a word in the first place? Of course, using the word presumes that it means something, and meaning something is to refer to something specific. For example, when you say “life is a game”, it is a metaphor, but you’re already assuming knowledge of what “game” means. When you think of games, you have some specific concrete examples ready to be called to mind, and they all share some characteristic that warrants abstraction into a single cognitive unit: game.
You have the crux of the matter correctly identified, but your conclusion is wrong. Concepts are not social conventions. What you’re proposing is a subjective view of concepts, where words are mere sounds used to tickle certain parts of the brains of other people, and where language is primarily a communication tool. But language is not primarily a communication tool. Language is a cognition tool. For what is it that we communicate when we speak? We communicate thoughts, but first we must think, and we think in concepts.
There are essentially three different views of concepts that have been proposed in the last few thousand years. First, there’s the intrinsic (realist/platonic) theory of concepts. In this view, concepts exist either in the things themselves, or they exist in some other dimension and the particular things are just instances of the concepts (i.e. a chair is an instance of the pure form Chair). Then there’s the subjective theory (yours), in which concepts are all in the mind. In this view, there is no “chairness” anywhere, and so a thing is a chair only because you or someone else calls it that. So if people start calling hamsters chairs, then a hamster is a chair. The third view (mine), is the objective theory of concepts, in which a concept is a relationship between the mind and the concrete things to which the concept refers. I urge you to read “Introduction to Objectivist Epistemology” for a good rundown of this theory of concepts, but in very brief terms it is this: A concept is a mental integration of two or more units possessing the same distinguishing characteristic(s), with their particular measurements omitted. In this context, “unit” is either a concept, or a percept (something perceived and integrated by the senses into a unit). So the process of concept-formation is a process of abstracting from experience.
That last sentence should tickle the programmer in you, at least a little. This is because the process of abstraction with regard to concepts is the exact same process that we use to come up with new abstractions in programming. In the paragraph that follows, I will use the term program to refer to any programmatic expression (or “object”, if you’re thus inclined).
As an example, think of types. What is a type, in programming? Some might say that a type is a pure Platonic form, and that programs are mere instances of these types. This would be the intrinsic view of types. But the objective view is that types are abstractions from programs. In other words, types are theories about programs, and specific programs are proofs of those theories (see Curry-Howard). By the same token, concepts are theories about reality, and the concrete things that inhabit reality constitute proof of those theories.
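A tiny Haskell sketch of that correspondence (my example; the name “theorem” is arbitrary): the type is the theory, and any program inhabiting it is a proof.

    -- Theory: for any propositions a and b, a implies (b implies a).
    -- The definition below proves it. By contrast, a type like
    -- forall a b. a -> b states a theory with no (total) proof.
    theorem :: a -> b -> a
    theorem x _ = x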
I’m not sure if “duck typing” represents the subjectivist view of types, but it’s certainly a satisfying analogy.
But I’m getting a little into the content of my next post. Let’s wait for that and then discuss.
It isn’t clear at all what people mean by “Object-Oriented”, even to themselves, and no amount of nudges and winks will make it so. Yes, I can dismiss their intention until and unless I see a reason to accept it.
“Concepts are not social conventions”
Wow. I understand that concepts are physical structures of a sort, but that is not what I think that you mean.
In its initial implementation, Object-Oriented programming was motivated by the desire to have compiler support for the construction of function tables.
In its mass-market implementation, Object-Oriented programming was justified through the idea of domain modeling, which would help to facilitate requirements discovery and negotiation.
In either case, it was a series of techniques adopted by a community looking to do work more efficiently.
Now 50 years on down the road, the methods and technologies have been expanded and morphed to deal with all kinds of practical implementation issues, to the point that the concept is now largely self-referential. Maybe that explains your difficulty.
Are you certain that in some not-too-distant future, the same won’t be true of “Functional Programming”?
And does it really matter? Why do we invest in concepts? Only because they have utility. We learn of utile concepts from others that have found them useful (nobody, not even Aristotle, invents everything from first principles). So the transmission and survival of concepts is largely a social phenomenon.
The same will definitely not happen with functional programming, because FP is a well-defined specific thing, like long division or differential calculus.
So you think that you do not understand what someone means when he uses the phrase ‘Object-Oriented’? You say that you have read countless texts and had countless discussions in which you were just reading and talking gibberish? If so, then how could you make sense of those texts and discussions? And if not, then how did you determine what the ‘non-concept’ meant?
As an aside, you are right that I’m a subjectivist. I will try, as far as I am capable, to interpret your answer to questions above in the light of your epistemology.
Ivo,
I could not make sense of those texts and discussions, ultimately. That’s the whole point. The reason I wasn’t able to make sense of them is that they do not make sense, i.e. they do not integrate with other knowledge. A non-concept does not have a meaning. It is the sound of the voice of a parrot, signifying nothing.
I’m glad that you’re able to identify yourself as a subjectivist. The first step to recovery is recognising that you have a problem.
You can conclude with hindsight that they did not make sense, but I’m sure you didn’t feel that way at the time (or they would be very strange conversations)? How could you think you were having a sensible conversation if some words were completely devoid of meaning? Or if you do not consider them devoid of meaning, then I wonder how you see the relationship between concepts and the meaning of words.
Aside: subjectivism is an uncommon position for a physicist. I do not experience it as a problem :).
Ivo,
In hindsight, I liked the OO religion and I wanted it to make sense. I found the analogy between physical systems and software systems vaguely satisfying, as I’m sure others do. But feeling does not fact make. As an analogy, the fact that gods do not exist doesn’t preclude people from having seemingly coherent conversations about them.
As for the relationship between concepts and the meaning of words: Words are symbols that refer to concepts, and which words denote which concepts is largely arbitrary. They’re perceptual mnemonics used to recall the concept. I.e. you can’t grasp a concept just by learning the word, you have to observe concretes and abstract their similarities, omit their differences, and integrate them into a mental unit which is the concept. No matter how many times you’ve heard the word “dog” and had it explained to you, you can’t grasp what a dog *is* until you’ve experienced both dogs and a lot of things that aren’t dogs. And once you do form that concept, you realise that others have formed that concept as well, and that they use the word “dog” for it, so you decide to use that word too.
Aside: a common symptom of a mental problem is lack of insight into the fact that one has a problem.
I agree that coming up with a definition of OO that everybody agrees with is not easy. But, I figure just because e.g. Java is crap doesn’t mean Alan Kay is wrong. He coined the term, no?
http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en
If there are lots of people who get the definition of the concept wrong, that’s not to say there is actually no such concept. Just that a lot of people don’t know what the original, core concept really is. The hullabaloo about how OO is to emulate sundry objects in the real world is just misleading OOD trash somebody else came up with.
Corporate techno buzzword hype marketing b.s. (OO! Java! etc.) kills.
-sincerely.
If every word denoted a concept, there could not be a ‘non-concept’ for which there is a word. However, you assert that ‘object-oriented’ is a non-concept. The question then arises: what *does* such a word refer to? Maybe the concept of ‘unknown word’? But if I were to use unknown words, you would clearly understand that you didn’t understand what I was talking about, and that wasn’t the case here.
You have had meaningful conversations in your life, in which you understood what the other person meant, even though he used wrong, even non-existent, words. The wrong word can denote the right concept, because you are capable of associating the word with the concept. Not because the words sound alike, look alike or have similar meanings, but because you are able to ascribe meaning to words based on the context.
The danger that I referred to earlier is that you will tempt yourself and others to dismiss what someone says, solely because you have declared one of the terms that he uses to express himself as meaningless. Meaning does not get lost when a single word does not have a clearly defined meaning.
On the subjectivism: I don’t think I suffer from more problems than objectivists and those in between. We’re all walking tightropes.
Ivo,
Don’t be ridiculous, not every word denotes a concept. For example, the words “slebfert”, “fraxmond”, “numberwang”, and the hyphenated phrase “object-oriented”.
I’ll take my chances demanding that people know what they mean by what they say, danger be damned.
If some words do not denote concepts, then what do they denote? They must denote something, otherwise you couldn’t make sense of the sentence in which they are used. If someone says “The snow was grue”, then you usually guess that he was either trying to say that the snow was green or that the snow was blue. How could you possibly infer that correctly if ‘grue’ did not denote any concept?
Ivo,
I won’t engage in analyzing a folly. What you’re asking is either proof of a negative, or the rational consideration of the arbitrary. Either one of those is a logical error. Philosophizing is to philosophy as rationalization is to reason.
You’ve never heard someone make a slip of the tongue, putting in a completely different word than he intended, while still understanding what he meant?
Well, let me put it to you bluntly then, to repay ‘ridiculous’ and ‘folly’ in kind: you are sweeping the complexities of language and meaning under the rug, seemingly in order to be able to dismiss the term ‘object-oriented’ and anyone that uses it. The assertion that a word needs to denote a single concept is preposterous, as is the implicit assertion that _you_ have managed to do what men like Kripke and Putnam failed to do: come up with a theory of meaning that isn’t riddled with problems. Every single utterance calls upon your conception of ‘language’ and of ‘speech’. An utterance of ‘red’ does not only call upon your conception of ‘red’, but also of ‘color’, ‘light’ and, depending on the exact color shade that is meant, ‘blood’, ‘coral’ or, if you are colorblind, ‘that shade of grey’.
I offer you two words for contemplation: ‘context’ and ‘Putnam’.
Ivo,
I haven’t asserted that a word needs to denote a single concept. It can denote several, or none. Nor have I asserted that I’ve come up with a “theory of meaning”. I have, however, come across Objectivist epistemology. You should check it out.
It’s not our job to try to infer what people mean by their “utterances”. It’s up to them to first demonstrate that they understand what they’re saying, before we attempt to do that for them.
That’s about all I’m going to say about this for now.
In a fully or partly objectivist epistemology, you still cannot have words that are ‘non-concepts’. Every word invokes concepts, such as ‘unknown word that I can look up in an English dictionary’, ‘unknown word whose meaning I don’t understand’ or ‘word with which the user could denote several possible concepts’. Only when you fail to recognize it as a word at all could you say an utterance is a ‘non-concept’. Even then, you may understand what someone is conveying.
“It’s not our job”…. it’s what you do, every time you communicate! Daniel Dennett once wrote that a philosopher should not deny what he experiences. He should be very well aware of what he experiences and make sure it is accounted for. You cannot deny that every day, you infer what people mean. Language is (necessarily) vague: that is what gives it such power and the possibility to convey richness and shades of meaning. The theory of meaning you built your case upon is naive and too simple to have anything to do with reality. It is a toy model.
Moreover, declaring a central term in the vocabulary of the people you are trying to convince to be a ‘non-concept’ is a sure way of crippling the dialogue. You have to have extremely good arguments before you do such a thing, lest you end up in discussions like these, where we stray far away from the point you were trying to make. If you are really interested in ridding the world of unclear, confusing talk about ‘object-oriented’ things, then declaring the term a ‘non-concept’ is the wrong way to go about it.
Really, I am quite sympathetic to what you set out to do and the reason for this criticism is that I wish you to succeed.
Ivo,
Thanks for your feedback.
Are you aware that concept formation and categorisation are amenable to experimental investigation, that the experiments have been done and that the Aristotelean model turns out to have almost no connection to what people actually do when they understand the world?
Keith,
I don’t know how to understand your statement, as it appears to have a type mismatch. Can you provide a reference?
Sure. The most significant papers are collected in “Concepts: Core Readings”, Margolis and Laurence (editors), MIT Press, 1999 which has been digitised by google books.
I suggest you start with Rosch’s “Principles of Categorisation”, itself a survey of her own earlier work. http://books.google.com/books?id=sj1gczQ-7K8C&lpg=PP1&pg=PA189#v=onepage&q=&f=false
I just stumbled onto this blog/post/discussion and see that you’ve touched on some of the things I started writing about in 2006 when first learning Philosophy 101 (at age 50 after 30+ years of software engineering).
Your audience may be interested in http://ExistentialProgramming.com/ (although with the caveat that the early years were just transcribed notebook entries and I was just starting to learn western philosophy….but I think the blog entries have gotten better over time). :-)
On a tangential note, I just read this: http://www.cs.utexas.edu/~wcook/Drafts/2009/essay.pdf which provides an interesting view on what an object is in terms of data abstraction.
As I wrote in Existential Programming, OOP uses objects for several different categories of entities as Aristotle would have defined them. This causes confusion when programmers don’t realize that objA == objB doesn’t mean EntityA == EntityB, nor does objA != objB mean that EntityA != EntityB.
Hi Rúnar,
Your conclusion, that “object-oriented programming does not exist,” can be easily disproven within the narrow construct of your argument. I’m not sure I agree with the way you have set up the question, but I’m willing to work with it.
There is one defining and essential concept that is present in every object-oriented or object-based system. This idea is variously called “dynamic dispatch”, “message passing”, “virtual method invocation”, “polymorphism”, or even “dynamic binding”. Some of these words have extra meanings or connotations (especially “message” and “polymorphism”) but that does not prevent them from having precise meanings in the context of specific systems / languages / papers. You could arrive at the concept inductively by examining OO languages and systems (criterion #1). It is also essential (criterion #2). Thus it meets your criteria for concept formation. You yourself acknowledge this definition, when you say:
“In the case of Smalltalk, for example, it is closed over late-bound lambda expressions with an implicit “self” argument. This is fairly precise, so I’m willing to concede that if there’s anything to be called OO, then it’s merely this. In that case it seems to boil down to late vs. early binding, and a.f(b) instead of f(a, b).”
The difference between a.f(b) and f(a, b) is quite significant, not just a matter of “which end to open the egg on”. The connection is related to the duality of algebra and co-algebra, or programming with observations rather than pattern matching. Later several people mention the idea of “first-class modules”, which would also count as object-oriented. “Dynamic dispatch” is exactly the essential characteristic that I discuss in my essay, which was cited in the previous comment by Eric. The idea is that objects are collections of first-class functions/procedures. This obviously generalizes functional programming. If that was all it was, then it would not be interesting. The focus of my essay is to point out that the interesting thing about object-oriented programming is the use of (collections of) functions to represent data. OO eschews the use of algebraic data types, with constructors and pattern matching, because they are fundamentally closed and not extensible. The OO approach optimizes openness and extensibility, at the cost of concision and optimization. Perhaps it’s not an accident that the reference to my essay ended the discussion.
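To make the contrast concrete, here is a minimal Haskell rendering of the integer-set example along the lines of the essay (my sketch here, simplified):

    -- ADT style: the representations are closed (just these constructors),
    -- but new operations can be added freely by pattern matching.
    data SetADT = Empty | Insert Int SetADT

    memberADT :: Int -> SetADT -> Bool
    memberADT _ Empty        = False
    memberADT y (Insert x s) = y == x || memberADT y s

    -- Object style: a set *is* its membership observation. The set of
    -- operations is closed, but anyone can define new kinds of sets,
    -- including ones no finite ADT value could represent:
    type SetObj = Int -> Bool

    emptyObj :: SetObj
    emptyObj _ = False

    insertObj :: Int -> SetObj -> SetObj
    insertObj x s = \y -> y == x || s y

    evens :: SetObj
    evens = even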
The fact that many people misunderstand object-oriented programming and want it to be more than it is does not cause the concept to vanish. If so, then “functional programming” would also not exist. Like you, I prefer definitions that have technical content and are not just social conventions. The idea that objects model the real world is not fundamental; it was added later. Inheritance also has a precise technical definition (which is different from normal composition) but it is neither essential to OO nor unique to it, as many functional programs use inheritance (where it is called “open recursion”). Even subtyping and imperative effects are not essential. Inheritance, subtyping, and imperative updates are very useful, but not absolutely essential.
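To illustrate the last point, here is one common functional encoding of inheritance as open recursion (a sketch, with invented names; methods are written against a “self” that is only bound when the knot is tied):

    import Data.Char (toUpper)

    data Greeter = Greeter { greet :: String, shout :: String }

    -- A "class" is a function of self; shout is defined via self,
    -- so it sees any later override of greet.
    base :: Greeter -> Greeter
    base self = Greeter { greet = "hello"
                        , shout = map toUpper (greet self) }

    -- The "subclass" overrides greet before the fixpoint is taken.
    child :: (Greeter -> Greeter) -> Greeter -> Greeter
    child super self = (super self) { greet = "hi" }

    new :: (Greeter -> Greeter) -> Greeter
    new gen = let self = gen self in self
    -- shout (new base)         == "HELLO"
    -- shout (new (child base)) == "HI"   (late binding at work)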
There should not be any fundamental conflict between OO and functional styles. They both exist and have benefits and drawbacks. They are just different ways of modeling computation, and they both exist happily within various versions of the lambda-calculus, if you care to look for them.
Since we are both in Austin, perhaps we can get together sometime to discuss? I’m easy to find… just google “William Cook”.
“The focus of my essay is to point out that the interesting thing about object-oriented programming is the use of (collections of) functions to represent data. OO eschews the use of algebraic data types, with constructors and pattern matching, because they are fundamentally closed and not extensible. The OO approach optimizes openness and extensibility, at the cost of concision and optimization.”
The problem here is that the *set of functions* is closed and not extensible; it seems to me that your claim of openness here runs straight into the expression problem – you can either have an open set of types and a closed set of operations, or an open set of operations and closed set of types. In any nontrivial project, we will tend to need both at some point but will have to choose one and have some hacky workaround for the other.
Yes, you are right that there are different dimensions of extensibility. Just to set the record straight, the “expression problem” was named by Wadler, but I was the first to formalize it thoroughly, and it was originally discovered by John Reynolds. See “Object-oriented programming versus abstract data types” on my publication page http://www.cs.utexas.edu/~wcook/publications.htm
The deeper issue is that depending on what kind of system you are building, different kinds of extensibility matter more than others. For building GUI applications, window systems, file systems, and such things, it is often more important to create new *kinds* of objects that implement standard interfaces, than it is to add new methods to existing data types. This is the reason why OO enabled the creation of the modern desktop and operating systems. If you are doing compilers or type systems, then it might be more useful to add functions to an existing data type. Another point is that OO has, out of the box, a better overall story for extensibility: objects support data extensibility directly, and inheritance can be used to extend objects with more functions. With pattern matching on disjoint sums, it is quite difficult to extend the data type. More recent solutions to the Expression Problem tend to combine OO and functional ideas, and we haven’t yet seen the end of research on this topic.
William,
I have read your paper and I think it’s good and thought-provoking. I tend to disagree that the “open data types” approach optimizes openness, since then the operations are necessarily closed (Wadler’s expression problem), but we’ll put that to the side. I absolutely agree that there is no conflict between FP and OO, and there has never been one.
The kind of first-class module system that you describe is generally what I’ll refer to as a system having “objects”. But then again it’s not entirely clear to me what the difference is between that and polymorphic functions. For example, Haskell has a type-class facility that lets you classify types according to the operations they support. So for example, the type of the + function is forall n. Num n => n -> n -> n. It’s not decided until an argument is passed which + function to call. At that time, an appropriate Num dictionary is passed and the appropriate implementation is dispatched. So Haskell has that kind of dynamic dispatch. However, you cannot manipulate these dictionaries as first-class values in the language. Are they “objects”? Is Haskell now an “object-oriented” language? Or are these first-class semantics required? You could very well implement the same kind of type-class mechanism with a record-based language like MLPolyR, and that definitely would let you manipulate the dictionaries as first-class values. Would you then be doing object-oriented programming? It’s not at all clear. It seems to be the rule that if you feel you’re doing object-oriented programming, then you are.
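To make the dictionary-passing explicit, here is a hand-rolled sketch of such a dictionary as an ordinary record (illustrative only; this is not GHC’s actual representation):

    -- A "Num dictionary" as a plain record: dispatch is now visible,
    -- and the dictionary is an ordinary first-class value.
    data NumDict n = NumDict { plus :: n -> n -> n, zero :: n }

    intDict :: NumDict Int
    intDict = NumDict (+) 0

    -- The caller picks the dictionary, and could do so at runtime.
    sumWith :: NumDict n -> [n] -> n
    sumWith d = foldr (plus d) (zero d)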
In normal/typical uses of Haskell type classes, the dictionaries are not first-class. They are bound statically at compile time. As a result, they are NOT objects. It is possible to use an existential type to make the type-class dictionary first-class, but this is non-standard. But the result does work more like an object. There is some discussion of this here, but I’m not sure it is definitive: http://www.haskell.org/haskellwiki/OOP_vs_type_classes
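For example, a standard existential encoding looks something like this (a sketch, using Show only for brevity):

    {-# LANGUAGE ExistentialQuantification #-}

    -- The Show dictionary travels with the value and the type is hidden,
    -- so dispatch happens through the packaged dictionary, object-style.
    data Showable = forall a. Show a => MkShowable a

    describe :: Showable -> String
    describe (MkShowable x) = show x

    -- A heterogeneous list, impossible without the existential:
    mixed :: [String]
    mixed = map describe [MkShowable (42 :: Int), MkShowable True]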
It is possible to create objects in almost every functional language, including Haskell, ML and Lisp/Scheme. There are many papers and libraries that show how to do this. This does NOT make these languages “object-oriented”. This example illustrates the true meaning of “object-oriented”: it is a language/system that has special support for creating and using objects. This kind of extra support includes subtyping, inheritance, tooling with IDEs, etc. Similarly, functional languages have special support for making functional programming useful, for example Haskell’s “do” notation. It is possible to write pure functional programs in most OO languages, but that does not make such languages “functional languages”.
Interesting post. I’m eager to read more.
Have you heard of the “Treaty of Orlando”? It’s a consensus on different mechanisms for sharing behavior in OO languages.
I’ve made this point before and gotten a different definition from every person who tried to argue that there is a definition for “Object-oriented.”
Yes, I’m very aware of the Treaty of Orlando. I was a PhD student with Lynn Stein at Brown at the time, and consulted with her. The Treaty focuses on the relationship between delegation versus inheritance and prototypes versus classes. In the end the two mechanisms are closely related, but they just change the time at which binding is specified. Prototypes/delegation works at runtime, while classes/inheritance are more static. Both approaches use “objects” as defined above.
As Keith Braithwaite pointed out, the idea that a category requires that all elements share common properties, none of which can be omitted, is not borne out by empirical evidence. In addition to his references, see /Women, Fire, and Dangerous Things/. It adds other examples to Wittgenstein’s “game”. For example: “Is the pope a bachelor?” The answer from most people is “yes, but…” and people qualify that answer for sound reasons: doing so communicates.
Categories as we use them typically have central elements, ones that are more X-like than others: For people of a certain age, Hugh Hefner was the prototypical bachelor. Categories also have fuzzy boundaries: the pope is near such a boundary. A widower is another less-prototypical case: if we refer to a widower as a bachelor, that’s both a true statement and something we might need to elaborate on to avoid confusing our interlocutor.
It’s also useful to try (I have!) to come up with a definition of “chair” that distinguishes them from “stools”. (Note that many bar stools have arms and a back.) Don’t forget the existence of bean-bag chairs. Now, you might be tempted to say that we are all fools and that bean-bag chairs aren’t chairs at all. But how can you justify that with induction? On what do you base the induction to what a “chair” is, if not on what people point to when they refer to chairs?
Which gets to an answer to this of yours:
“If it is not the case that things called ‘game’ have a shared characteristic, then why do we use that term? What could it possibly mean? Why is there even such a word in the first place?”
I’m a fan of the American Pragmatism of Peirce, James, and Dewey. It was born in part as a philosophical reaction to evolution (see /The Metaphysical Club/ for an OK history), and it emphasizes that we humans are beings in a world that we constantly act to change, with greater or lesser success. They thought that a philosophy that didn’t grapple with that must be sterile. A Pragmatist’s answer to “Why is there even such a word in the first place?” would be something like “Because it enables us human beings to achieve our goals in the world — which world, for humans, is not just physical but also intensely social — well enough. We don’t have a precise definition for ‘game’ because that would serve almost no one. It’s like the reason our eyes are fooled by optical illusions: getting better at not being fooled didn’t confer any advantage.”
William James’s /Pragmatism/ is a good early introduction. Menand’s /Pragmatism: A Reader/ is decent too, especially because it gives some exposure to Peirce, who doesn’t get enough.
Austin’s /How to Do Things with Words/ fleshes out the picture. He points out that not all utterances are propositional (“The cat is on the mat.”, “A chair is…”). Many are also what he calls “performative” in that they change the world (“I now pronounce you man and wife.”). Some have argued that all language has a performative component; that utterances are uttered so as to change something in the world. (As the poet and programmer Richard P. Gabriel has put it, “a poem is a program that executes in the reader’s brain.”) A word is a tool. We don’t judge a hammer according to some platonic ideal of hammer-ness; we judge it by whether it serves its purpose. Similarly, we oughtn’t judge “game”, “chair”, or “object-oriented programming” by the precision of their definitions, but by whether they serve our purposes.
P.S. You mention somewhere that species are identified by inductively discovering common traits. I recommend looking into cladistics. I believe it’s still pretty dominant in the taxonomy that actual biologists do. Story: back in the early-ish days of object-oriented programming, I read a piece in The Economist about cladistics, and looked up someone in the University of Illinois biology department, all fired up to see how cladistic techniques might be applied to what (even then) seemed to me the unworkably rigid categorizations of class-based languages. When I emailed him asking to set up a meeting, he declined by saying that he got someone from computer science contacting him every couple of years, and it was always a bust, because they just weren’t prepared for the messiness of the world taxonomists have to deal with.
Yes, I can see that. If it’s not glaringly obvious, I am not a fan.
Well! That’s settled, then!
I’m still waiting for Runar to define the essential characteristics of a “game”. Should be a straightforward task — or do they not exist, either?
Scroll up a bit.
It seems you asserted that there were essential characteristics of a “game”, but perhaps a fuller answer would not quite fit in the margin.
Here are four games to think about:
chess, a game of catch, military ‘war games’, and solitaire.
OK, so how do you know those things are games? Is that established a priori?
That is a rhetorical question, by the way. This question of defining “game” is a red herring.
If I can quote you:
“But try as you might, there’s no objective definition of what ‘object-oriented’ refers to.”
And I suppose the same goes for “game” (after first claiming there was an objective definition, you appear now to have changed your mind). I hardly see how that’s a “red herring”, it seems rather an important point about the nature of language and meaning.
I’m just trying to point out that you seem to already know a lot of games. So there has to be some characteristic (or multiple possibly non-overlapping sets of characteristics) they have in common. Otherwise, why would you refer to them with the same term? The answer is that it serves a cognitive utility. There is a difference between saying “this is a game” and “this is not a game”. There are some attributes that you and I find convenient to refer to that way, but the use is highly contextual and fluid. But “object-oriented” is not that way at all. There is no useful distinction at all between programming that is object-oriented and programming that is not. It is a mere shift in perspective of the programmer.
Right, and “object-oriented” must also serve a cognitive utility — otherwise, why would the term exist? Evidently, even you find the word useful to describe a “shift in perspective” on the part of the programmer. Is that an “object-oriented perspective”?
The term exists to make a distinction without a difference.
I don’t have anything else to say about this, so feel free to get in the last word.
I am happy to accept your final, resplendently incoherent answer, that the term “object-oriented” exists for the precise purpose of making a distinction without a difference.
It’s settled from my perspective anyway. See The Menace of Pragmatism.
You’ve just roamed thru most of the topics I’ve blogged about the last few years (ExistentialProgramming.com) …like most philosophical discussions, it all comes down to definitions, and definitions depend on your purpose for asking…see this post about science using 20+ competing definitions of “species” and which you pick depends on why you want to know…
http://existentialprogramming.blogspot.com/2010/05/purpose-of-thing-is-in-us-as-well-as-in.html
Ever since reading Cook’s paper, I’ve been hoping for some better resolution between OO-impressed and OO-unimpressed camps. The definition of “objects” /is/ unfortunately too easily reduced to a semantic debate.
After reading Cook’s paper, I started using the term Simulation-oriented programming to refer to what seems to be attacked by posts like this one and Kiselyov’s (http://okmij.org/ftp/Computation/Subtyping/). These posts often have arguments that seem to presume a coupling between objects and features like inheritance and mutable state. I try not to use the term “object orientation” to avoid confusion. But when I say “object” these days, I find myself talking about them under the constraints Cook defines. Although Cook’s definition of objects may not be commonly accepted, I prefer it for its apparent 1) consistency with prior art, 2) concreteness, 3) minimalism, and 4) elucidation of the duality between ADTs and objects. The discussion of this duality seems much more meaningful than quibbling over the clearly false dichotomy between a never-well-defined “OO” and the sometimes-misunderstood “FP.” I really like that under Cook’s terms, untyped lambdas are /both/ objects /and/ pure functions, which makes the distinction between “OO” and “FP” seem particularly uninteresting. Comparing objects to ADTs seems more to the point and less intellectually evaporative.
However, I’m far from an authority on such things. I’m really uninterested in baseless proofs of the existence of something as vague as “object-orientation.” I’m not even really interested in deconstructing Cardelli’s object calculi, which are far from minimal and also seem to miss a lot of the essence of the formalized abstractions. Instead, I’d like to see more discussion of Cook’s comparison of the pattern matching of ADTs to the autognosticism of objects. I’m also interested in arguments for or against looking at objects as coalgebras. Even if we don’t call these coalgebras “objects,” the coalgebras will still exist.
Although programmers may disagree over the particulars of OO programming, at least one thing it has meant to me, aside from the particulars (encapsulation, polymorphism, etc.), is that the abstraction should bind the methods and the data to the object the program is representing. This is different from functional programming, where the methods would exist separate from the data structures on which they operate.
Another thing to understand is that a programmer builds up some mental model around which to think about the problem he’s addressing. That mental model affects how he constructs and reasons about the software.
The OO paradigm does closely correlate to how people think about things in an abstract sense. Even before I gained years of programming experience, when I was first introduced to the OO paradigm, it felt natural. It wasn’t much of a leap to think in terms of properties and methods bound to the objects to which they belonged.
I have since retrained my brain to think in more functional terms, and even though I prefer the paradigm, it does present a less natural mental model. Because of familiarity and experience I handle it fine, but I handle it as an experienced programmer. I can’t imagine that a layperson would conceptually reason about a problem using a mental model more closely akin to functional programming than to object-oriented programming.
There is always some dissonance between the actual code abstractions and the mental model one uses to reason about real-world problems inside the vacuum of a computer. I believe that the average person/programmer will have an easier time understanding and maintaining code when that dissonance is reduced. That is, the closer the code can reflect the mental model in which a person thinks about a problem, the easier that code will be to understand and maintain. Again, I refer to average programmers as it may take a great deal of conditioning for a person to pick up the functional paradigm/way of thinking.