I'd actually be less concerned about what a programming language
prevents me from doing than what a programming language helps me do.
The problem of poorly-written code is real, but it happens in any
language you can name.
In any case, when you write a program, you're essentially solving a
problem, and pretty much all problems follow this structure in some
form:

    Combine(Existing assumptions) -> Derived assumptions

You'll run into problems if your existing assumptions are:
1. Incorrect
2. Disorganized
3. Not inferred

For instance, the scientific method is a formalization of this, used
to derive sound conclusions from existing assumptions.
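
To make that concrete, here's a rough Clojure sketch (Clojure since
that's what started this thread; the function and data are invented):

    ;; Existing assumptions: items are maps, and each map has a
    ;; numeric :price. Derived assumption: the total is a number.
    (defn total-price [items]
      (reduce + (map :price items)))

    ;; It breaks the moment an existing assumption is wrong, e.g. an
    ;; item that has no :price key:
    (total-price [{:price 3} {:name "apple"}])  ; NullPointerException
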
Actually writing the program down should be an expression of your
assumptions and organization, and the same rules apply, except in the
context of the programming language. The two most common problems
you'll have with poorly written code are your own inability to infer
something the other person did, and disorganization, and the two feed
on each other. Lack of inference alone generally leads to a WTF
error. Disorganization alone is hard for someone else to interpret.
Both together are a complete mess. Those rules are pretty much
straightforward; the exception is disorganization, which is more
complex because its metric is really a plurality of metrics, though
you could probably nail a lot of them down from lists of usability
metrics.
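
Here's the lack-of-inference problem in miniature (invented Clojure
again; the names are mine):

    ;; Hard to infer: what are :d and 86400 supposed to mean?
    (defn conv [m]
      (* (:d m) 86400))

    ;; The same logic with the assumptions spelled out:
    (def seconds-per-day 86400)

    (defn duration->seconds [{:keys [days]}]
      (* days seconds-per-day))

The second version isn't better because it's longer; it's better
because the reader no longer has to reverse-engineer the author's
assumptions.
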
Programming languages, meanwhile, don't regulate per-context lack of
inference very well, and they only semi-regulate disorganization.
There are structures that provide organizational tools, but their own
organization and simplicity are the only things that make them
actually usable (which is probably an argument for high-level
programming languages). To be honest, I'd think the best way to go
about inferences would be to create tools that help express them
(take syntax highlighting, for example).
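
Clojure's pre- and post-conditions are one small existing example of
such a tool (a sketch; the function itself is invented):

    ;; :pre and :post state the author's assumptions formally and get
    ;; checked at runtime, instead of leaving the reader to infer them.
    (defn baskets->fruit-total [apples oranges]
      {:pre  [(integer? apples) (not (neg? apples))
              (integer? oranges) (not (neg? oranges))]
       :post [(integer? %)]}
      (+ apples oranges))
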
Another thing: people go on rants about documentation, but when
you're working on a problem, your existing code and your existing
assumptions are usually acting as documentation for you. More formal
documentation is probably an afterthought for most of us, because the
actual code acts as informal documentation. Then there's the "code
can be self-documenting" part of the argument. Actually, yes, it
should be; but programming languages provide few formal means to
describe organization and inferences, and code itself is not
self-organizing.
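
To be fair, most languages do have docstrings as one formal
documentation hook; they just say nothing about organization
(invented Clojure again):

    ;; A docstring formally documents what the function does...
    (defn days->seconds
      "Derives a second count from a day count."
      [days]
      (* days 86400))

    ;; ...but nothing formal describes where this function sits in
    ;; the organization of the rest of the program.
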
One problem is that you can move a statement in code around to other
lines independently, but there's nothing to help us know, for
instance, in what sequential or organizational context it's actually
needed, or whether moving it might conflict with another task.
Sometimes that context even expands.
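
A tiny invented example of that hidden context, using a Clojure atom
for state:

    (def state (atom {}))

    ;; These two statements look independently movable, but the second
    ;; assumes the first has already run; nothing in the code says so.
    (swap! state assoc :ready true)
    (println "ready?" (:ready @state))

Swap the two lines and the program still runs; it just quietly prints
the wrong thing. The sequential context lives only in the author's
head.
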
But yeah, this is me just saying that what you're arguing is apples
and oranges, and that you're missing the true underlying issues that
affect both dynamically and statically typed languages. There is no
such thing as a language that doesn't make inferences. Take it down
to assembly code and you'll still be inferring the functionality of
the instructions.
On Thu, 2009-03-19 at 15:28 -0400, Peter C. Chapin wrote:
> On Sat, 14 Mar 2009, Gary Johnson wrote:
> > Just ran across another great essay by Rich Hickey (the designer and
> > benevolent dictator of the Clojure language), in which he reminds
> > creators of contrib libraries of the wastefulness of adding classes
> > and types to a language with pervasive map abstractions. Check it
> > out. This isn't a Lisp- or Clojure-specific problem. The central
> > principle at work here is Alan Perlis' oft-cited quote:
> > "It is better to have 100 functions operate on one data structure
> > than 10 functions on 10 data structures."
> > So dig it, and hack on.
> > http://groups.google.nl/group/clojure/browse_thread/thread/e0823e1caaff3eed
> There is a counter argument to this. The idea behind strongly typed
> languages is to allow the programmer to use types to track logically
> distinct concepts. The programmer specifically does *not* want those
> concepts mixed arbitrarily. Mixing everything with everything else just
> creates a big mess.
> For example (using Ada)
> -- Introduce two distinct types.
> type Apple_Count is new Integer;
> type Orange_Count is new Integer;
> -- Create appropriate variables.
> Apple_Basket_Size : Apple_Count;
> Orange_Basket_Size : Orange_Count;
> -- We are confused.
> Apple_Basket_Size := Orange_Basket_Size;
> The last line is a compile-time error. Does it really make sense to store
> a count of oranges in a variable intended to hold a count of apples? It
> probably doesn't. If it does, an explicit type conversion can be applied:
> -- After code review, this assignment deemed safe...
> Apple_Basket_Size := Apple_Count(Orange_Basket_Size);
> In languages that allow anything to be done to anything, logic errors like
> the one above are detected (if they are caught at all) only during
> testing. There is a time and place for such languages, but they definitely
> have their disadvantages.
> So to bring this back to the original posting... treating all classes
> uniformly as maps has a certain elegance, but I wonder how many nasty
> bugs it hides.