The worst mistake of computer science

Posted by Paul Draper


Uglier than a Windows backslash, odder than ===, more common than PHP, more unfortunate than CORS, more disappointing than Java generics, more inconsistent than XMLHttpRequest, more confusing than a C preprocessor, flakier than MongoDB, and more regrettable than UTF-16, the worst mistake in computer science was introduced in 1965.

I call it my billion-dollar mistake…At that time, I was designing the first comprehensive type system for references in an object-oriented language. My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
– Tony Hoare, inventor of ALGOL W.

In commemoration of the 50th anniversary of Sir Tony Hoare’s null, this article explains what null is, why it is so terrible, and how to avoid it.

What is wrong with NULL?

The short answer: NULL is a value that is not a value. And that’s a problem.

It has festered in the most popular languages of all time and is now known by many names: NULL, nil, null, None, Nothing, Nil, nullptr. Each language has its own nuances.

Some of the problems caused by NULL apply only to a particular language, while others are universal; a few are simply different facets of a single issue.

NULL…

  1. subverts types
  2. is sloppy
  3. is a special case
  4. makes poor APIs
  5. exacerbates poor language decisions
  6. is difficult to debug
  7. is non-composable

1. NULL subverts types

Statically typed languages check the uses of types in a program without actually executing it, providing certain guarantees about program behavior.

For example, in Java, if I write x.toUpperCase(), the compiler will inspect the type of x. If x is known to be a String, the type check succeeds; if x is known to be a Socket, the type check fails.

Static type checking is a powerful aid in writing large, complex software. But for Java, these wonderful compile-time checks suffer from a fatal flaw: any reference can be null, and calling a method on null produces a NullPointerException. Thus,

  • toUpperCase() can be safely called on any String…unless the String is null.
  • read() can be called on any InputStream…unless the InputStream is null.
  • toString() can be called on any Object…unless the Object is null.

Java is not the only culprit; many other type systems have the same flaw, including, of course, ALGOL W.

In these languages, NULL is above type checks. It slips through them silently, waiting for runtime to finally burst free in a shower of errors. NULL is the nothing that is simultaneously everything.
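
A minimal Java sketch of the flaw (toy code, not from the article): the program below type-checks cleanly, yet the very same call fails at runtime.

```java
public class NullSubvertsTypes {
    // The compiler verifies that toUpperCase() exists on String...
    static String shout(String s) {
        return s.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(shout("hello")); // HELLO

        // ...but null inhabits the String type too, so this also
        // compiles, and only fails at runtime.
        try {
            System.out.println(shout(null));
        } catch (NullPointerException e) {
            System.out.println("NullPointerException at runtime");
        }
    }
}
```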

2. NULL is sloppy

There are many times when it doesn’t make sense to have a null. Unfortunately, if the language permits anything to be null, well, anything can be null.

Java programmers risk carpal tunnel from writing

if (str == null || str.equals("")) {
}

It’s such a common idiom that C# adds String.IsNullOrEmpty:

if (string.IsNullOrEmpty(str)) {
}

Abhorrent.

Every time you write code that conflates null strings and empty strings, the Guava team weeps.
– Google Guava

Well said. But when your type system (e.g. Java, or C#) allows NULL everywhere, you cannot reliably exclude the possibility of NULL, and it’s nearly inevitable it will wind up conflated somewhere.

The ubiquitous possibility of null posed such a problem that Java 8 added type annotations (JSR 308), letting tools such as the Checker Framework enforce @NonNull and retroactively patch this flaw in the type system.
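
Short of annotations, one common stopgap in plain Java is to reject null at the boundary so the failure surfaces early, at the point of construction, rather than at some distant call site. A hedged sketch using the standard library's Objects.requireNonNull (Java 7+):

```java
import java.util.Objects;

public class FailFast {
    private final String name;

    // Objects.requireNonNull throws immediately if name is null,
    // so the error points at the guilty constructor call.
    FailFast(String name) {
        this.name = Objects.requireNonNull(name, "name must not be null");
    }

    public static void main(String[] args) {
        System.out.println(new FailFast("Ada").name); // Ada
        try {
            new FailFast(null);
        } catch (NullPointerException e) {
            System.out.println(e.getMessage()); // name must not be null
        }
    }
}
```

This does not exclude null from the type; it merely shrinks the distance between the bad assignment and the crash.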

3. NULL is a special case

Given that NULL functions as a value that is not a value, NULL naturally becomes the subject of various forms of special treatment.

Pointers

For example, consider this C++:

char c = 'A';
char *myChar = &c;
std::cout << *myChar << std::endl;

myChar is a char *, meaning that it is a pointer—i.e. the memory address—to a char. The compiler verifies this. Therefore, the following is invalid:

char *myChar = 123; // compile error
std::cout << *myChar << std::endl;

Since 123 is not guaranteed to be the address of a char, compilation fails. However, if we change the number to 0 (which is NULL in C++), the compiler passes it:

char *myChar = 0;
std::cout << *myChar << std::endl; // runtime error

As with 123, NULL is not actually the address of a char. Yet this time the compiler permits it, because 0 (NULL) is a special case.

Strings

Yet another special case happens with C’s null-terminated strings. This is a bit different from the other examples, as there are no pointers or references. But the idea of a value that is not a value is still present, in the form of a char that is not a char.

A C-string is a sequence of bytes, whose end is marked by the NUL (0) byte.

 76 117  99 105 100  32  83 111 102 116 119  97 114 101  0
 L   u   c   i   d       S   o   f   t   w   a   r   e  NUL

Thus, each character of a C-string can be any of the possible 256 bytes, except 0 (the NUL character). Not only does this make string length a linear-time operation; even worse, it means that C-strings cannot be used for ASCII or extended ASCII. Instead, they can only be used for the unusual ASCIIZ.

This exception for a singular NUL character has caused innumerable errors: API weirdness, security vulnerabilities, and buffer overflows.

If NULL is the worst mistake of computer science, then NUL-terminated strings are the most expensive one-byte mistake.

4. NULL makes poor APIs

For the next example, we will journey to the land of dynamically-typed languages, where NULL will again prove to be a terrible mistake.

Key-value store

Suppose we create a Ruby class that acts as a key-value store. This may be a cache, an interface for a key-value database, etc. We’ll make the general-purpose API simple:

class Store
    ##
    # associate key with value
    # 
    def set(key, value)
        ...
    end

    ##
    # get value associated with key, or return nil if there is no such key
    #
    def get(key)
        ...
    end
end

We can imagine an analog in many languages (Python, JavaScript, Java, C#, etc.).

Now suppose our program has a slow or resource-intensive way of finding out someone’s phone number—perhaps by contacting a web service.

To improve performance, we’ll use a local Store as a cache, mapping a person’s name to his phone number.

store = Store.new()
store.set('Bob', '801-555-5555')
store.get('Bob') # returns '801-555-5555', which is Bob’s number
store.get('Alice') # returns nil, since it does not have Alice

However, some people won’t have phone numbers (i.e. their phone number is nil). We’ll still cache that information, so we don’t have to repopulate it later.

store = Store.new()
store.set('Ted', nil) # Ted has no phone number
store.get('Ted') # returns nil, since Ted does not have a phone number

But now the meaning of our result is ambiguous! It could mean:

  1. the person does not exist in the cache (Alice)
  2. the person exists in the cache and does not have a phone number (Ted)

One circumstance requires an expensive recomputation, the other an instantaneous answer. But our code is insufficiently sophisticated to distinguish between these two.

In real code, situations like this come up frequently, in complex and subtle ways. Thus, simple, generic APIs can suddenly become special-cased, confusing sources of sloppy nullish behavior.

Patching the Store class with a contains() method might help. But this introduces redundant lookups, reducing performance and inviting race conditions.
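
Java's HashMap has exactly this ambiguity built in: get() returns null both for a missing key and for a key mapped to null, and the containsKey() workaround is a second lookup.

```java
import java.util.HashMap;
import java.util.Map;

public class AmbiguousGet {
    public static void main(String[] args) {
        Map<String, String> phones = new HashMap<>();
        phones.put("Bob", "801-555-5555");
        phones.put("Ted", null); // Ted is cached as having no phone number

        // Both lookups return null, for entirely different reasons.
        System.out.println(phones.get("Ted"));   // null: cached, no number
        System.out.println(phones.get("Alice")); // null: never cached

        // Disambiguating requires a second, redundant lookup...
        System.out.println(phones.containsKey("Ted"));   // true
        System.out.println(phones.containsKey("Alice")); // false
        // ...and between the two lookups another thread may mutate
        // the map: a classic check-then-act race condition.
    }
}
```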

Double trouble

JavaScript has this same issue, but with every single object.
If a property of an object doesn’t exist, JS returns a value to indicate the absence. The designers of JavaScript could have chosen this value to be null.

But instead they worried about cases where the property exists and is set to the value null. In a stroke of ungenius, JavaScript added undefined to distinguish a null property from a non-existent one.

But what if the property exists, and is set to the value undefined? Oddly, JavaScript stops here, and there is no uberundefined.

Thus JavaScript wound up with not only one, but two forms of NULL.

5. NULL exacerbates poor language decisions

Java silently converts between reference and primitive types. Add in null, and things get even weirder.

For example, this does not compile:

int x = null; // compile error

This does compile:

Integer i = null;
int x = i; // runtime error

though it throws a NullPointerException when run.

It’s bad enough that member methods can be called on null; it’s even worse when you never even see the method being called.
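
A common real-world form of this (a toy sketch, not from the article) is unboxing the result of Map.get: the compiler inserts an invisible .intValue() call, and that hidden call is what throws.

```java
import java.util.HashMap;
import java.util.Map;

public class HiddenUnboxing {
    public static void main(String[] args) {
        Map<String, Integer> wordCounts = new HashMap<>();
        wordCounts.put("null", 1000000);

        // Compiles fine: the compiler silently inserts .intValue().
        int known = wordCounts.get("null");
        System.out.println(known); // 1000000

        try {
            // get() returns null, and the invisible .intValue() throws:
            // a method call we never wrote, on a null we never saw.
            int missing = wordCounts.get("maybe");
            System.out.println(missing);
        } catch (NullPointerException e) {
            System.out.println("NullPointerException from hidden unboxing");
        }
    }
}
```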

6. NULL is difficult to debug

C++ is a great example of how troublesome NULL can be. Calling member functions on a NULL pointer won’t necessarily crash the program. It’s much worse: it might crash the program.

#include <iostream>
struct Foo {
    int x;
    void bar() {
        std::cout << "La la la" << std::endl;
    }
    void baz() {
        std::cout << x << std::endl;
    }
};
int main() {
    Foo *foo = NULL;
    foo->bar(); // okay
    foo->baz(); // crash
}

When I compile this with gcc, the first call succeeds; the second call fails.

Why? foo->bar() is known at compile-time, so the compiler avoids a runtime vtable lookup, and transforms it to a static call like Foo_bar(foo), with this as the first argument. Since bar doesn’t dereference that NULL pointer, it succeeds. But baz does, which causes a segmentation fault.

But suppose instead we had made bar virtual. This means that its implementation may be overridden by a subclass.

    ...
    virtual void bar() {
    ...

As a virtual function, foo->bar() does a vtable lookup for the runtime type of foo, in case bar() has been overridden. Since foo is NULL, the program now crashes at foo->bar() instead, all because we made a function virtual.

int main() {
    Foo *foo = NULL;
    foo->bar(); // crash
    foo->baz();
}

NULL has made debugging this code extraordinarily difficult and unintuitive for the programmer of main.

Granted, dereferencing NULL is undefined behavior in the C++ standard, so technically we shouldn’t be surprised by whatever happened. Still, this is a non-pathological, common, very simple, real-world example of one of the many ways NULL can be capricious in practice.

7. NULL is non-composable

Programming languages are built around composability: the ability to apply one abstraction to another abstraction. This is perhaps the single most important feature of any language, library, framework, paradigm, API, or design pattern: the ability to be used orthogonally with other features.

In fact, composability is really the fundamental issue behind many of these problems. For example, the Store API returning nil for non-existent values was not composable with storing nil for non-existent phone numbers.

C# addresses some problems of NULL with Nullable<T>. You can include the optionality (nullability) in the type.

int a = 1;     // integer
int? b = 2;    // optional integer that exists
int? c = null; // optional integer that does not exist

But it suffers from a critical flaw: Nullable<T> cannot apply to any T. It can only apply to non-nullable value types (where T : struct). For example, it doesn’t make the Store problem any better.

  1. string is nullable to begin with; you cannot make a non-nullable string
  2. Even if string were non-nullable, thus making string? possible, you still wouldn’t be able to disambiguate the situation. There isn’t a string??
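
By contrast, a properly composable option type nests. A hedged Java sketch of the missing string?? (using java.util.Optional; the cache layout is illustrative, not a recommended API): the outer Optional answers "is the name cached?", the inner one "is there a phone number?".

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class ComposedOptionals {
    // Outer Optional: is the name in the cache at all?
    // Inner Optional: does that person have a phone number?
    static final Map<String, Optional<String>> cache = new HashMap<>();

    static Optional<Optional<String>> get(String name) {
        return Optional.ofNullable(cache.get(name));
    }

    public static void main(String[] args) {
        cache.put("Bob", Optional.of("801-555-5555"));
        cache.put("Ted", Optional.empty());

        System.out.println(get("Bob"));   // Optional[Optional[801-555-5555]]
        System.out.println(get("Ted"));   // Optional[Optional.empty]
        System.out.println(get("Alice")); // Optional.empty
    }
}
```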

The solution

NULL has become so pervasive that many just assume that it’s necessary. We’ve had it for so long in so many low- and high-level languages, it seems essential, like integer arithmetic or I/O.

Not so! You can have an entire programming language without NULL. The problem with NULL is that it is a non-value value, a sentinel, a special case that was lumped in with everything else.

Instead, we need an entity that contains information about (1) whether it contains a value and (2) the contained value, if it exists. And it should be able to “contain” any type. This is the idea of Haskell’s Maybe, Java’s Optional, Swift’s Optional, etc.

For example, in Scala, Some[T] holds a value of type T. None holds no value. These are the two subtypes of Option[T], which may or may not hold a value.

Option[T], Some[T], None

The reader unfamiliar with Maybes/Options may think we have substituted one form of absence (NULL) for another form of absence (None). But there is a difference — subtle, but crucially important.

In a statically typed language, you cannot bypass the type system by substituting a None for any value. A None can only be used where we expected an Option. Optionality is explicitly represented in the type.

And in dynamically typed languages, you cannot confuse the usage of Maybes/Options and the contained values.

Let’s revisit the earlier Store, but this time using ruby-possibly. The Store class returns Some with the value if it exists, and a None if it does not. And for phone numbers, Some is for a phone number, and None is for no phone number. Thus there are two levels of existence/non-existence: the outer Maybe indicates presence in the Store; the inner Maybe indicates the presence of the phone number for that name. We have successfully composed the Maybes, something we could not do with nil.

cache = Store.new()
cache.set('Bob', Some('801-555-5555'))
cache.set('Tom', None())

bob_phone = cache.get('Bob')
bob_phone.is_some # true, Bob is in cache
bob_phone.get.is_some # true, Bob has a phone number
bob_phone.get.get # '801-555-5555'

alice_phone = cache.get('Alice')
alice_phone.is_some # false, Alice is not in cache

tom_phone = cache.get('Tom')
tom_phone.is_some # true, Tom is in cache
tom_phone.get.is_some # false, Tom does not have a phone number

The essential difference is that there is no more union, statically typed or dynamically assumed, between NULL and every other type; no more nonsensical union between a present value and an absence.

Manipulating Maybes/Options

Let’s continue with more examples of non-NULL code. Suppose in Java 8+, we have an integer that may or may not exist, and if it does exist, we print it.

Optional<Integer> option = ...
if (option.isPresent()) {
    System.out.println(option.get());
}

This is good. But most Maybe/Optional implementations, including Java’s, support an even better functional approach:

option.ifPresent(x -> System.out.println(x));
// or option.ifPresent(System.out::println)

Not only is the functional style more succinct, it is also a little safer. Remember that option.get() will produce an error if the value is not present. In the earlier example, the get() was guarded by an if. In this example, ifPresent() obviates the need for get() at all. It makes the code obviously have no bug, rather than have no obvious bugs.

Options can be thought of as a collection with a maximum size of one. For example, we can double the value if it exists, or leave it empty otherwise.

option.map(x -> 2 * x)

We can optionally perform an operation that returns an optional value, and “flatten” the result.

option.flatMap(x -> methodReturningOptional(x))

We can provide a default value if none exists:

option.orElse(5)
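
These fragments combine into one runnable Java sketch; areaCode here is a hypothetical Optional-returning method standing in for methodReturningOptional above.

```java
import java.util.Optional;

public class OptionalOps {
    // Hypothetical Optional-returning method: extract a phone's area code.
    static Optional<String> areaCode(String phone) {
        return phone.length() >= 3 ? Optional.of(phone.substring(0, 3))
                                   : Optional.empty();
    }

    public static void main(String[] args) {
        Optional<String> phone = Optional.of("801-555-5555");
        Optional<String> missing = Optional.empty();

        // map: transform the value if present, stay empty otherwise.
        System.out.println(phone.map(String::length));   // Optional[12]
        System.out.println(missing.map(String::length)); // Optional.empty

        // flatMap: apply an Optional-returning function without nesting.
        System.out.println(phone.flatMap(OptionalOps::areaCode)); // Optional[801]

        // orElse: supply a default when empty.
        System.out.println(missing.orElse("no number")); // no number
    }
}
```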

In summary, the real value of Maybe/Option is

  1. reducing unsafe assumptions about which values “exist” and which do not
  2. making it easy to safely operate on optional data
  3. explicitly declaring any unsafe existence assumptions (e.g. with a .get() method)

Down with NULL!

NULL is a terrible design flaw, one that continues to cause constant, immeasurable pain. Only a few languages have managed to avoid its terror.

If you do choose a language with NULL, at least possess the wisdom to avoid such awfulness in your own code and use the equivalent Maybe/Option.

NULL in common languages:

Language                 NULL                    Maybe
C                        NULL
C++                      NULL                    boost::optional, from Boost.Optional
C#                       null
Clojure                  nil                     java.util.Optional
Common Lisp              nil                     maybe, from cl-monad-macros
F#                       null                    Core.Option
Go                       nil
Groovy                   null                    java.util.Optional
Haskell                                          Maybe
Java                     null                    java.util.Optional
JavaScript (ECMAScript)  null, undefined         Maybe, from npm maybe
Objective-C              nil, Nil, NULL, NSNull  Maybe, from SVMaybe
OCaml                                            option
Perl                     undef
PHP                      NULL                    Maybe, from monad-php
Python                   None                    Maybe, from PyMonad
Ruby                     nil                     Maybe, from ruby-possibly
Rust                                             Option
Scala                    null                    scala.Option
Standard ML                                      option
Swift                                            Optional
Visual Basic             Nothing

“Scores” are according to:

  • Does not have NULL.
  • Has NULL. Has an alternative in the language or standard libraries.
  • Has NULL. Has an alternative in a community library.
  • Has NULL.
  • Programmer’s worst nightmare: multiple NULLs.

Edits

Ratings

Don’t take the “ratings” too seriously. The real point is to summarize the state of NULL in various languages and show alternatives to NULL, not to rank languages generally.

The info for a few languages has been corrected. Some languages have a null pointer of sorts for compatibility with their runtimes, but it isn’t really usable in the language itself.

  • Example: Haskell’s Foreign.Ptr.nullPtr is used for FFI (Foreign Function Interface), for marshalling values to and from Haskell.
  • Example: Swift’s UnsafePointer must be used with unsafeUnwrap or !.
  • Counter-example: Scala, while idiomatically avoiding null, still treats null the same as Java, for increased interop. val x: String = null

When is NULL okay?

It deserves mention that an in-band special value like 0 or NULL, the same size as an ordinary value, can be useful for cutting CPU cycles, trading code quality for performance. This is handy in low-level languages like C, when performance really matters, but it should be left there.

The REAL problem

The more general issue of NULL is that of sentinel values: values that are treated the same as others, but which have entirely different semantics. Returning either an integer index or the integer -1 from indexOf is a good example. NUL-terminated strings are another. This post focuses mostly on NULL, given its ubiquity and real-world effects, but just as Sauron is a mere servant of Morgoth, so too is NULL a mere manifestation of the underlying problem of sentinels.
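
The indexOf sentinel translates directly into code: -1 is an int pretending not to be an index, while an OptionalInt-returning wrapper makes the absence explicit (indexOfOpt is a hypothetical helper, not a standard method).

```java
import java.util.OptionalInt;

public class SentinelIndex {
    // Hypothetical wrapper: the absence of an index is a real absence,
    // not the integer -1 in disguise.
    static OptionalInt indexOfOpt(String haystack, String needle) {
        int i = haystack.indexOf(needle);
        return i >= 0 ? OptionalInt.of(i) : OptionalInt.empty();
    }

    public static void main(String[] args) {
        // The sentinel: -1 is type-compatible with every real index.
        System.out.println("null".indexOf("u")); // 1
        System.out.println("null".indexOf("x")); // -1, a "valid" int

        System.out.println(indexOfOpt("null", "u")); // OptionalInt[1]
        System.out.println(indexOfOpt("null", "x")); // OptionalInt.empty
    }
}
```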


This entry was posted in Architecture, Behind the Scenes.

30 Responses to The worst mistake of computer science

  1. OldFart says:

    It sounds like the problem is not with NULL But with the C/C++/Java-style implementation of it.

    For example, Common Lisp has NIL (not “nil”), which you call CL’s NULL value, but it’s very different from the similarly named C/C++/Java concept. #3 and #5 don’t even make sense in CL. I don’t think #6 or #7 apply, either. #1 and #2 don’t really mean much, either, since CL doesn’t make any promises about types, unless you explicitly assert them (in which case you can easily assert non-NIL-ness, too).

    What’s left? “NULL makes poor APIs.” Hmm, maybe. As a Lisp programmer, though, it doesn’t seem so bad, since none of those other problems really exist for us. Besides, we have multiple return values, which helps avoid many of the API problems you cite. Just as NULL “exacerbates poor language decisions”, its trouble can be muted by *good* language design decisions.

    In philosophical terms, NULL is a weird case, but in practical terms, it’s not really a problem for me (as long as I stay outside of the C/C++/Java world). On a daily basis, I’m much more troubled that, say, exception objects aren’t class instances, or that defstruct pollutes my namespace.

  2. Peter says:

    In the Java section,

if (str == null || str == "") {
    }

    doesn’t do what you think it does. In fact, I don’t think that second clause can ever be true b/c it’s testing if it’s the same object as the empty string object you’re just now instantiating inline. Needs to be:

if (str == null || str.equals("")) {

    }

    But the “str == null” does what you want! Which I guess proves the main point about null muddying the waters yet again. (In Java, null works for physical equality in places where you’re trying to use equality by value.)

  3. Brendan Zabarauskas says:

    Great article, but by your standards, Haskell should also be four stars, because it has Foreign.Ptr.nullPtr, which is basically like Rust’s std::ptr::null, and basically just used for FFI bindings. So either Rust should be 5 stars, or Haskell should be 4.

  4. I can’t say for certain about the other languages, but the Python “None” value is NOT the same as null. None has a value (None), it has a type (NoneType), it can be introspected, it can be composed, and so on.

    Of course, it doesn’t do very much, but it’s semantically similar to lots of other small classes that don’t do much.

  5. Justin Heyes-Jones says:

    Nice exposition.

    BTW there’s a typo here:

    “Options can be regardless as a collection”

    Should be “regarded” I think.

    J

  6. Rust should get 5 stars since you have to use `unsafe` in order to use `std::ptr::null`, and in general it is discouraged to use `unsafe` if you’re not interacting with other C APIs.

  7. Paul Draper says:

    Peter, Brendan, thanks for the corrections.
    It’s been fixed :)

  8. Dan D. says:

    I think you’re being too harsh on Objective-C there. nil, Nil, and NULL are all the same null (differentiated solely for contextual readability), and NSNull isn’t a null at all. Messages to nil/Nil/NULL are no-ops if they don’t return anything, and return nil/Nil/NULL, 0, 0.0(f) or zeroed structs if they do.

    Behaviorally, NSNull is more similar to classical trapping nulls (you get a runtime exception trying to message it anything it doesn’t respond to, which is better defined than in C), but it’s used where a nil/Nil/NULL can’t be used. In the Store example above, Bob’s phone number could be set to [NSNull null] rather than nil. And it is composable in the other direction that you can have an NSNull property whose value is nil. Given that Objective-C is a dynamic language that barely pretends to be static (replace all your object pointer types by id, and everything will still compile and run fine, with maybe a few more warnings), that’s pretty good.

  9. Nick says:

    Kotlin and Ceylon are some neat looking up and coming languages that handle empty values in a nice way – possibly worth adding to your list at the end there to highlight good alternate approaches.

  10. yarovon says:

    Go’s nil is somewhat different to C-style NULL: it is the zero value for certain types — it is to these types as 0 is to int and “” is to string.

    As such, it has none of the issues described in the article and is a different beast entirely.

  11. Joe Swinbank says:

    “Optionals” are already built into C:
    void f( int required, int* optional ).

  12. Joe says:

    > Rust should get 5 stars since you have to use `unsafe` in order to use `std::ptr::null`, and in general it is discouraged to use `unsafe` if you’re not interacting with other C APIs.

    Swift should get 5 stars too, for the same reasons. UnsafePointer is only for C interop, and native Swift types aren’t nullable.

  13. The worst mistake in computer science has been the use of “Foo” and “Bar” as example classes/methods instead of a more natural pattern.

  14. Kevin Chen says:

    Great post — just a small nitpick. The more idiomatic way to write C# is:


    if (String.IsNullOrEmpty(str)) {
    }

  15. GregW says:

    This article and its list of languages is woefully incomplete without a discussion or sub-discussion of SQL and its use of NULLs and the intersection of that with the languages discussed.

  16. Jeremie says:

    I think, to be fair, that C# should get a 3 stars, considering there is librairies out there, that allow you to emulate an equivalent of “Maybe” to avoid the null value in your programs :)

    Regardless, great article.

  17. Ron Aaron says:

    Thanks for the interesting article! You caused me to add two new words to 8th (http://8th-dev.com) to check for existence of array or object keys…

  18. Frode Nilsen says:

    C# should have 4 stars with the Nullable operator https://msdn.microsoft.com/en-us/library/2cf62fcy.aspx

    Also, in C# 6 you can avoid NullReferenceExceptions elegantly with the null propagation operator http://davefancher.com/2014/08/14/c-6-0-null-propagation-operator/

    This makes C# one of the better languages to work with NULLs in

  19. Baz says:

    Peter: you’re wrong about that string comparison. String literals in java are automatically interned, so eg “”==”” will work – you don’t get two separate instances. str==”” will work if str is an interned string (ie it was explicitly interned, or was assigned from a literal). JLS 3.10.5:

    Moreover, a string literal always refers to the same instance of class String. This is because string literals – or, more generally, strings that are the values of constant expressions (§15.28) – are “interned” so as to share unique instances, using the method String.intern.

  20. Brad says:

    C# should have 4 stars as it has support for your proposed solution (since .NET version 2.0… which came out in 2005) via the Nullable struct.

  21. Mohit Keswani says:

    Great article and how Option or Guava library can solve it.

  22. Title says:

    The title is misleading to attract traffic. NULL has nothing to do with computer science.

  23. ZipBooks says:

    That intro would have killed my ability to read the whole article except each pain point cut deeper (and was more obscure) than that last. How long did you sort those until you got the perfect order?

    If anyone is up to the challenge of rewriting the fundamentals AND having them adopted sans NULL, it would have to be Paul. :)

    Just the kind of read I would expect coming out of Lucid’s dev team.

  24. Derrick says:

    “confusing sources of sloppy nullish behavior” – my favorite

  25. Rob says:

    The problem is that null will never go away as an implementation detail, at least in my world of high performance coding. In C/C++, there’s no good way to implement optional types without RTTI (which is a terrible idea) or carrying around extra state in a tagged union or special type. With null pointers, an address is the same size as a register and can be tested with simple instructions. The need for simplicity only gets worse as you move down to assembly.

    Although that being said, we still use things that look awfully like optional types in high level APIs. In a project I’m working on, expected errors (we dont use exceptions) get returned in a struct that looks like this:

    struct RetVal
    {
    enum Error { None, ResourceNotFound, etc };
    Ret* val;
    }

    It just happens to be in a form more suited to actually execute on computers.

  26. Omar De La Rosa says:

    Excelent Post.

  27. Philip Oakley says:

    Null in this context has the same problem as infinity does for mathematics (e.g. the set of integers (0, 1, …, N, N+1, inf).

    It’s a special case that shouldn’t exist based on one set of rules, but is felt essential based on a wider rules.

    Null is just the limit of 1/N as N tends to infinity; it’s sort of zero, but not actually zero. (see http://web.maths.unsw.edu.au/~norman/papers/SetTheory.pdf for the mathematicians side of the coin)

  28. Yves says:

    FYI NULL in “C” is not a value but a character it’s from the ASCII character set. When you hit the “NULL” key the terminal sends:
    START BIT
    THE VALUE 0
    STOP BIT

    In comparison if you hit the “ZERO” key the terminal sends:
    START BIT
    THE VALUE 48
    STOP BIT

    Because the NULL character has a Boolean value of false it is very easy to detect the end of the string:
    if(!string(x)) // indicate end of string

    also CR(\r) and LF(\n) are often used as string terminator they are not. They can be used anywhere withing a string they are simply control for the display or printer to move the cursor. { note: the \n should read “next line” and not “new line”}

    In “C” the compiler use the “;” as the end of string , this allows multiple string: per physical line:
    ex- a=1; b=3; c=a+b; if(c==3) c=0;

    or you can have a string spanning many physical line:
    ex- typedef structure NewType
    {
    int alpha, beta,
    long foo,
    floating bar
    };

    bottom line is it’s up to the developer to decide what he will use as an end of line/string character.

  29. Hello Paul for reasons obvious to me,
    I don’t agree with your NULL point of view

    The text found on this web page saves me a lot of typing why NULL is essential.
    http://everything2.com/title/null

    void <- Your examples with C++ and C, because these are unstructured languages, use only when there is no other option e.g. low level development and then even reconsider.

    void <- Your examples in Javascript. Javascript is a stinking unstructured pest, making web programming a real nightmare, please replace e.g. with Dart or Python. Your surely can't be using Javascript to demonstrate proper NULL handling?

    Algol W -anyone still using this language?- has NULL and appropriately so.
    Carpal tunnes syndrome? Write a macro to combine NULL/Empty conditions.

    How would you like a (relational) database system without NULLs? For example: is a zero value in a column AMOUNT == 0 or has it not (yet) been entered?

    btw: null (optionals) treatment in iOS/OSX Swift is excellent.

    Kind Regards
    Ted

  30. Paul Draper says:

    Ted, that link says

    A vitally important part of nearly any programming language. While computer programs normally think in terms of particular values, sometimes you need to express the lack of a value.
    This sort of effect can be kludged via magic numbers like -1 or other arbitary values, but it’s better if your language of choice does it internally, eliminating any possible ambiguity.

    I agree completely with that. My point is that NULL is also such a “kludgy” value, that causes semantics to contradict types (whether dynamic or static). Maybes/Options are the “non-kludgy” solution to optionality.
