Tuesday, December 9, 2014

After Years of C++ and Java, I Chose Python

Introduction

I have been in the software business for quite a long time. When I started, high-level languages (HLLs) were fairly new. I spent a few years at Bell Labs, where C and Unix were invented, and then spent many of the following years at IBM, with a few other stops along the way. Although we were taught some HLLs in college, my early jobs involved specialized assembly languages. Only after quite a few years was I able to start using HLLs in my work. In the late 1980s and early 1990s, I was hearing a lot of buzz about object-oriented (OO) programming. I took a graduate-level course in object-oriented design and programming, and I was hooked. In this course I was introduced to Smalltalk and saw a demonstration of how productive one could be in that environment.

I believed object-oriented analysis, design, and implementation was the way to do software. Although OO perhaps has not lived up to all its hype, I still believe it is generally the best way to design and implement most systems. I was pretty much consumed with learning everything I could about OO and becoming as much of an expert as possible. I have over 20 books dealing with OO, including many about C++ and quite a few dealing with Java. I went to OOPSLA and saw presentations by Grady Booch, Peter Coad, and others. I studied various techniques for the design of OO software, especially techniques based on entity-relationship (ER) models. I started using C++ when only preprocessors (like Cfront) were available, and then moved on to real C++ compilers. In the mid-to-late 1990s IBM made a big move into Java, and so did I.

I have never lost my love of programming. I turned down a chance to go into management, and even when I had significant project leadership roles, I made sure I still had a major part in writing code. A while back I retired early, but recently decided to go back into the workforce. I wanted to find a job that combined my interests in Linux and programming. Based on my recent 15+ years of Java experience (especially J2EE), that would have been the area that made the most sense for me. Or, even though my experience was not as recent, C/C++ would be another area that made sense, since I had a lot of experience with it as well. However, despite the fact that I had used Python for less time and mostly as a scripting language (especially using Jython with WebSphere), I decided to consider it as well. I had recently started writing more scripts in Python instead of bash on my Linux systems. (I made the move to Linux about 15 years ago.)

In preparing for potential jobs, including interviews and proficiency tests, I re-read various texts on Java/J2EE, C++, and Python. Some of my Python texts were out of date (10 years old), so I got newer ones. As I went through this refresher stage, I was able to compare the languages. Although I started out thinking I would most like to program in C++ again, after a while I came to a different conclusion: I really wanted a job where I used Python. In this post, I will describe some of the reasons why, as well as mention some of my doubts and concerns.

Personal Perspective and Static Typing

In comparing C++, Java, and Python, one should keep one's own perspective in mind. If I have listened to a piece of music many times, and then I hear it played differently (perhaps at a different tempo), it can sound, well, wrong. That doesn't mean there is actually something wrong, but because I am used to it one way and I experience it done a different way, it doesn't seem right. As I mentioned earlier, I have spent a lot of time studying OO techniques and languages. I have spent a lot of time learning and using C++ and Java, and there are a lot of similarities in how they do things. In assessing Python, I had to guard against presuming that something done differently was done wrong.

Another thing I had to keep in mind was how the approach to Computer Science and programming has changed over the years. When I studied Computer Science, there was a lot of research around writing provably correct programs. One famous topic was Edsger Dijkstra's letter titled "Go To Statement Considered Harmful" (see this Wikipedia discussion). We were taught that a function or any program block should have one entry and one exit point. Thus, the idea of breaking out of a loop or returning from the middle of a block was a sign of poor programming practice. Proving even simple programs was a painstaking and tedious process, even without having to deal with multiple exit points. For many years, I religiously made sure to use a result variable rather than return early, and I would structure my loops so I never had to use break. There was even an initiative in the mid-1980s at IBM to start using program proof techniques on IBM software. This initiative died very quickly. Proving programs has been shown to be impractical, at least on a large scale in competitive environments.

Another aspect that was de rigueur, at least for serious languages, was static typing. You had to declare all your variables ahead of time, including their types. The compiler would check the types at compile time (not run time), avoiding the possibility that, for example, a value of the wrong type (primitive type, structure, etc.) could be passed into a function at run time. Clearly this eliminates a whole class of programming errors. Of course, I never heard much discussion of any downsides to this approach.

In C++, and much later in Java, came generic type support. That is, you could define some type of class, like a linked list, in terms of a generic type T. When you declare an actual instance of the class, you specify the real type that replaces the generic type T. So, for example, in C++ your class declaration might start like this:

       
template <class T>
class LinkedList {
public:
    LinkedList();
    ~LinkedList();
    T remove();
    void add(const T&);
    ...
};
       
 

Using such code, it is possible to create instances such as a LinkedList of Widget objects. Of course, typical implementations might have a LinkedListItem template class as well. This template (generic type) support guarantees at compile time that a LinkedList that is supposed to contain Widget instances will in fact only contain Widget instances. For a long time, I thought this type of language support was critical, and I was frustrated that it was not implemented earlier in Java.

Against this backdrop are various more dynamic languages, including scripting languages and, well, Python. Many people call Python just a scripting language, and many people use it solely for scripting. In fact, I have and still do use it for scripting; I now use it instead of bash for anything but the simplest scripts. There is a lot of discussion about strongly typed versus weakly typed languages. Some say Python is weakly typed, and others say it is actually strongly typed. For example, Python does check types at run time, so that you cannot add a number to a string without explicitly forcing a conversion. Other, even more dynamic, languages will do the conversion for you. While this may mean that technically Python is strongly typed, I do know that Python is not statically typed. That is, any type checking is done at run time, not compile time.
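
To illustrate, here is a minimal sketch of that run-time checking (the variable names are my own):

```python
from __future__ import print_function

# Python checks types at run time: adding a number to a string fails
# unless you force a conversion explicitly.
try:
    result = 2 + "3"
except TypeError as exc:
    print("TypeError:", exc)

# Explicit conversions work fine:
print(2 + int("3"))    # 5
print(str(2) + "3")    # 23
```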

Given my education, training, and experience (mostly in enterprise software), it is surprising to me that I would choose Python, especially after all these years. In this article I will compare Python with some other languages and describe what I found so compelling.

Object-Oriented Approach

Primitive Types and Objects


C++ and Java both have primitive types and support for objects (via classes). The fact that C++ has primitive types like char, int, etc. is not at all surprising. After all, C++ was an enhancement of C, and one of its tenets was that it inter-operate with programs written in C.

In Python, everything is an object. So, when you do the following:
       
val = 2
       
 

what you are actually assigning to the name val is an object of type int, not simply the value 2. While this may not be as efficient in space or overhead as assigning an integer value to a variable in C++ or Java, it is consistent. Unlike in other languages, with Python you don't have to keep in mind whether a given value is a primitive type versus some type of object. Note that functions too are objects. For example:

       

def triple(arg):
    return arg * 3

t = triple
print t(2)
       
 

results in the value 6 being printed. This makes creating callback functions trivial. Note that this works for member functions in classes as well. When you assign an instance's member function to a name and call it, it knows the instance object as well (via the im_self variable in Python 2, or __self__ in Python 3), so you simply invoke it as usual (without having to pass in the instance yourself).
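
A short sketch of such a bound-method callback (the Greeter class is a hypothetical example of my own):

```python
from __future__ import print_function

class Greeter(object):
    def __init__(self, name):
        self.name = name

    def greet(self, whom):
        return "%s greets %s" % (self.name, whom)

g = Greeter("Alice")
cb = g.greet              # bound method: the instance rides along
print(cb("Bob"))          # Alice greets Bob
print(cb.__self__ is g)   # True
```

Because the instance travels with the method object, any code holding cb can invoke it without knowing about g at all.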

Method Overloading

In C++ and Java, method overloading is supported. That is, you can have the same method (member function) name with different numbers and types of arguments. In actuality, internally the names are not the same: each name is mangled to include the types of the arguments. But to the programmer, the method name can be the same as long as the number or types of the arguments differ. In Python, there is no compilation phase that understands the types of arguments, so basically, without using something like decorators (more on these later), method overloading is not supported. (Note that there is an overload package that does use decorators to provide overloading support.)
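
In practice, default and keyword arguments often stand in for the overloads one would write in C++ or Java. A small sketch (the area function is my own illustration):

```python
from __future__ import print_function

def area(width, height=None):
    """One function covering what might be two overloads elsewhere."""
    if height is None:        # single argument: treat as a square
        height = width
    return width * height

print(area(3))      # 9
print(area(3, 4))   # 12
```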

Although I am used to being able to overload methods and have found it useful, using different method names instead (perhaps the same common name with different suffixes) is not really a big deal to me. However, there is one exception to this, and that is the copy constructor. The copy constructor is used to create a new instance of an object from another instance of the same type. For example, in C++, for a Widget class, the copy constructor would take the form:

       
Widget::Widget(const Widget &other)
{
    ...
}

       
 

There is no direct support for this in Python. It should be noted that technically Python does not have constructors, but instead has the __init__ method. Though not exactly the same as a constructor, the __init__ method is very similar. Like a constructor, it is called to initialize a new instance of a given class, after the instance has been created (by the __new__ class method in Python). Of course, in Python there are numerous ways one could choose to achieve the same results as a copy constructor in other languages. One could use the most generic signature of the __init__ method:

       
class Widget(object):
    def __init__(self, *args, **kwargs):
        ...
       
 

Using this generic approach of treating the arguments as a list, one could then, with various checks of the number (and, if necessary, the types) of arguments, achieve the same result as having various different constructors, including a copy constructor. Using this approach, one has to rely on documentation to allow the class user to differentiate between the different permutations. Another approach I have seen described, and have experimented with myself, is the use of a class static method. This method, used to create an instance of the class from another instance, would first call the class __new__ method and then copy the instance data from the instance passed in to the new instance. While this technique works, it does require one to remember (when writing the method) to invoke the __new__ method of the class.
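
A minimal sketch of that class-method approach (Widget and from_instance are hypothetical names of my own):

```python
from __future__ import print_function
import copy

class Widget(object):
    def __init__(self, size):
        self.size = size

    @classmethod
    def from_instance(cls, other):
        """Copy factory: create a blank instance via __new__,
        then copy the attributes over from the other instance."""
        new = cls.__new__(cls)
        new.__dict__ = copy.copy(other.__dict__)
        return new

w = Widget(5)
w2 = Widget.from_instance(w)
print(w2.size)   # 5
```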

As an alternative I actually prefer to use a decorator for the __init__ function. This decorator is a wrapper function that wraps __init__. It checks the first argument passed in, if any, and if it is an instance of the class the __init__ method is a part of, it invokes a different method, named _initcopy. Otherwise, it invokes the regular __init__ method.  A sample of this is shown below:

       
from __future__ import print_function
from functools import wraps
import copy

def with_copy_init(func):
    @wraps(func)
    def pickmethod(*args, **kwargs):
        """Wrapper for selecting initializer method"""
        if len(args) > 1:
            # For copy initializer, args[0] should be self
            if isinstance(args[1], args[0].__class__):
                # Call copy initializer. Call will automatically put self
                # as first argument
                args[0]._initcopy(args[1])
            else:
                # Call regular __init__ method
                func(*args, **kwargs)
        else:
            # Call regular __init__ method
            func(*args, **kwargs)
    return pickmethod


class MyClass(object):
    @with_copy_init
    def __init__(self, arg1, arg2):
        """Initializer"""
        print("*** MyClass __init__ method called ***")
        self.arg1 = arg1
        self.arg2 = arg2

    def _initcopy(self, other):
        """Copy initializer"""
        print("*** MyClass _createcopy method called ***")
        self.__dict__ = copy.copy(other.__dict__)

if __name__ == "__main__":
    inst = MyClass(1, 2)
    print("arg1 is", inst.arg1, "arg2 is", inst.arg2)
    cpy = MyClass(inst)
    print("in cpy, arg1 is", cpy.arg1, "arg2 is", cpy.arg2)
    print("Name of __init__ function is", cpy.__init__.__name__)
    print("Docstring of __init__ function is", cpy.__init__.__doc__)
       
 

and here is the output from running this code:

       
*** MyClass __init__ method called ***
arg1 is 1 arg2 is 2
*** MyClass _createcopy method called ***
in cpy, arg1 is 1 arg2 is 2
Name of __init__ function is __init__
Docstring of __init__ function is Initializer
 

Note that to create an instance of MyClass from an existing instance, the invocation looks the same as creating the original instance. The only difference is that instead of the two original arguments, the sole argument is an instance of MyClass. Another interesting aspect of Python is that instance variables are kept in the __dict__ instance variable (a dictionary). So, to copy all the instance attributes from the original instance to the new one, it is sufficient to make a copy of __dict__. There may be cases where you want to be more selective in what is copied, but in many cases this is sufficient. Those who have coded in C++ and/or Java will remember having to edit the copy constructor any time the set of instance variables was modified. Remembering to do this would normally not be necessary in Python using this decorator.

There are a couple of other things to note about this example. First, the wraps decorator from functools was not absolutely necessary. However, one of the nice things it does for you is copy the __name__ and __doc__ attributes from the function being wrapped (__init__ in this case) to the wrapper function (pickmethod). That way, if we reference __init__.__doc__ or __init__.__name__, we get the values we put in __init__, not what was in the wrapper function. Although this may not seem significant, it can reduce confusion when debugging. Secondly, the decorator is called with_copy_init because it does not define the copy initializer; rather, it is responsible for calling the copy initializer. Note also that it relies on the class having a method named _initcopy. One would use this decorator if and only if one needed a copy constructor (initializer) and had added the _initcopy method for this purpose.

Duck Typing

Coming from my previous work with C++ and (especially) Java, I got used to how things have to work. If you want to register some type of callback function with another object (for example, a listener), you must define an interface for this purpose. For example, in Java one might define:

       
public interface MessageListener
{
    public void processMessage(String message);
}
       

The class that wanted to listen to messages would then have to implement the MessageListener interface, and the class responsible for registering listeners would have to have a method (perhaps called addMessageListener) that would be passed an instance of MessageListener. In Python you can certainly do this as well. Although Python does not have special syntax for interfaces, it does have classes, and you could define a simple MessageListener class and use multiple inheritance (MI) to get something akin to "implements MessageListener". So, like other object-oriented languages, you could insist that implementers wanting to listen to messages conform to a given inheritance approach. However, this is not a requirement in Python. Instead, the code that originally handles the message could check to see if a given object has a processMessage method and, if so, just call it (assuming the code already had knowledge of this object). This is a simple example of duck typing in Python. The name stems from the expression "if it walks like a duck, and quacks like a duck, ...". Again, Python supports both approaches, while in Java you must use interfaces (unless you want to use something like Java introspection, which is a fair amount of work). The interface approach requires that you implement a particular type of class (or interface): if you instead inherit from a different class (or implement a different interface) which also has a processMessage method, the code will not compile. With the duck typing approach, the processMessage method would still be called. I guess the question becomes: do you want to insist that anyone listening to these messages declare "I am an instance of MessageListener"? In Python you have a choice.
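
A small sketch of the duck-typed approach (MessageBroker, Logger, and add_listener are hypothetical names of my own; only the processMessage method matters):

```python
from __future__ import print_function

class MessageBroker(object):
    """Accepts any listener that has a processMessage method;
    no interface or base class is required (duck typing)."""
    def __init__(self):
        self._listeners = []

    def add_listener(self, listener):
        # Optional check: require only the right "shape", not a type
        if not hasattr(listener, "processMessage"):
            raise TypeError("listener needs a processMessage method")
        self._listeners.append(listener)

    def publish(self, message):
        for listener in self._listeners:
            listener.processMessage(message)

class Logger(object):        # no interface, no special inheritance
    def __init__(self):
        self.seen = []

    def processMessage(self, message):
        self.seen.append(message)

broker = MessageBroker()
log = Logger()
broker.add_listener(log)
broker.publish("hello")
print(log.seen)   # ['hello']
```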

Handling of Properties

In Java, it is a common practice to access properties (attributes) of a class or instance via methods. That is, there are get methods to get a value and set methods to set a value. A major reason for using such methods, rather than accessing the property directly, is encapsulation. It is common practice in Java and C++ to declare properties as protected or private and only allow public access via methods. Part of the reasoning is that the data representation of the class may change, but if you are using methods to access the data, these underlying changes will not impact you. But this is only true if the name and signature of the existing methods still make sense (and are correct) in light of the change in the data representation. Often the data and the related methods all have to change. Don't get me wrong: I believe very strongly in encapsulation. In even fairly simple frameworks, it is extremely beneficial to be able to use objects for what they were intended, without having to know the internal details. But in many cases the addition of get and set methods for attributes is done more by rote than for any meaningful reason. Note that there are other reasons for having get and set methods (JavaBeans, persistent objects, etc.), but I am not talking about those reasons here.

In Python it is often the practice that properties are accessed by name (i.e., directly) without using methods. In fact, there are decorators (including @property) which make it easy to access properties as simple field names even when a method is used under the covers. There is a whole discussion of how Python handles this compared to other languages here. When I first started doing extensive Python programming, my first inclination was to use get and set methods for everything, as I had done in Java. Now I no longer do so. I think about whether the data is a property that should be publicly accessible or should be hidden, and that helps drive my decision making. I have noticed that being able to access properties more directly, especially when they are lists or dictionaries, has led to clarity in my code. Over time I will be able to better assess whether there is less encapsulation in my classes, leading to the exposure of information that should be hidden. Please note that Python has naming conventions to identify protected and private properties, although these are not enforced to the extent they are in other languages.
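
A minimal sketch of the @property approach (the Circle class is my own illustration): the caller sees plain attribute access, while a method validates under the covers.

```python
from __future__ import print_function

class Circle(object):
    def __init__(self, radius):
        self._radius = radius      # "protected" by naming convention

    @property
    def radius(self):
        return self._radius

    @radius.setter
    def radius(self, value):
        if value < 0:
            raise ValueError("radius must be non-negative")
        self._radius = value

c = Circle(2)
c.radius = 5        # looks like direct attribute access
print(c.radius)     # 5
```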

One other thing I find interesting in Python is that if you have separate properties with like characteristics, instead of having to use decorators like @property, you can use a class implementing the descriptor protocol. Depending on the situation, descriptors can be very beneficial and can reduce the redundant code required when using decorators.
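
Here is a small illustrative descriptor of my own (the NonNegative class is hypothetical), where one descriptor class serves several like properties instead of repeating @property boilerplate for each:

```python
from __future__ import print_function

class NonNegative(object):
    """Descriptor: validates on assignment via __set__, reads via __get__."""
    def __init__(self, name):
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        if value < 0:
            raise ValueError("%s must be non-negative" % self.name)
        instance.__dict__[self.name] = value

class Rectangle(object):
    width = NonNegative("width")    # one descriptor class,
    height = NonNegative("height")  # reused for both properties

    def __init__(self, width, height):
        self.width = width
        self.height = height

r = Rectangle(3, 4)
print(r.width * r.height)   # 12
```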

Context Managers

One of the things I found especially useful in C++ was using scoping to automatically set some state when entering a scope and revert the state upon exit from the scope. A common example is using such a class to open a file at the beginning of the scope and close it at the end. This works because the file is opened in the constructor and closed in the destructor at the end of the scope. In Java this really can't be done in the same way, because Java does not have destructors and the finalize method is not guaranteed to be called at the end of the scope (note that I have not tried the try-with-resources block in Java 1.7). In Python we can use context managers to do this. Here is an example which works with a file:

       
with open("/tmp/tmp.dat", "w") as tempfile:
    print("hello world", file=tempfile)
    ....
# At this point the file is closed
       
 

You can write your own class in Python to be used with the with statement. You simply set your desired state in the __enter__ method and restore the state in the __exit__ method. The __exit__ method may be passed an exception that was raised within the context, which it can handle or pass on. Context managers are useful for encapsulating the saving and restoring of state, so the programmer can concentrate on what happens within the context and not worry about how to set it up and restore it afterward. The use of context managers leads to a more declarative style of programming, which I will discuss more later in this article.
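
A minimal sketch of a hand-written context manager (ChangeDir is my own illustrative example): it temporarily changes the working directory and restores it on exit, even if an exception is raised.

```python
from __future__ import print_function
import os

class ChangeDir(object):
    def __init__(self, path):
        self.path = path

    def __enter__(self):
        self.saved = os.getcwd()   # save state on entry
        os.chdir(self.path)
        return self.path

    def __exit__(self, exc_type, exc_value, traceback):
        os.chdir(self.saved)       # restore state on exit
        return False               # do not swallow exceptions

with ChangeDir("/tmp"):
    print(os.getcwd())
# Back in the original directory here
```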

Other Aspects of Python

Lists, Sets, Tuples, and Dictionaries

Many programming languages have various types of collection objects, including lists, sets, tuples, and dictionaries.  In Python these collections are used heavily throughout the language. It is common to see lists, tuples, or dictionaries returned from functions. For instance, in order to return multiple values from a function in many languages you would define a class to contain the data. In Python, you can just return a tuple containing the values. For example, suppose you wanted to return two names. Your function could simply do the following:

       
return (name1, name2)
       
 

Then, assume the calling code assigned the result of the function to the name result. You could do the following:

       
name1, name2 = result
       
 

Or you could iterate over the tuple:
       
for name in result:
    print name
       
 

Note that a key difference between tuples and lists is that tuples are immutable. This means that, unlike lists, tuples can be used as dictionary keys.
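
A short sketch pulling these points together (min_max is a hypothetical function of my own):

```python
from __future__ import print_function

def min_max(values):
    # Return two values at once as a tuple
    return (min(values), max(values))

lo, hi = min_max([3, 1, 4, 1, 5])   # tuple unpacking at the call site
print(lo, hi)                       # 1 5

# Because tuples are immutable (and hashable), they can key a dict:
distances = {(0, 0): 0.0, (3, 4): 5.0}
print(distances[(3, 4)])            # 5.0
```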

There is notational support for lists in Python too. For example,

       
mylist = [1, "dog", "cat", 3]
       

and again, iterating over a list is clearer than in many other languages:

       
for e in mylist:
    print e
       

And of course there is notational support for dictionaries as well:

       
color_map = {1: "blue", 2: "red", 3: "green"}
for num, color in color_map.items():
    print num, "=>", color
       

You will note that the notational support adds both clarity and simplicity. It is easier to understand the code, as well as to remember the syntax. Of course, some other languages like Perl have similar notational support for lists, dictionaries, etc., but not languages like Java and C++. As I got used to (and appreciated) the notational support for lists, dictionaries, etc., I thought about these other languages without such support. After all, the compilers for C++ and Java could be enhanced to support such notations. But then consider the whole static typing issue. Consider, for instance, my list example earlier: I had a mixture of numbers and strings in the list. Although Java does have a base object, java.lang.Object, the compiler could not assume that this base class is the type of data the coder wanted in the list. Instead, the coder would likely want a list restricted to containing some more specific type, and this must of course be declared. (Note also that we are leaving out the whole issue of including primitive (non-object) types of data in a list in languages like Java and C++.)

Here we are seeing a fundamental philosophical difference between dynamic languages like Python and statically typed languages like Java and C++. In Python you could choose to have a collection containing only a particular type of object, but the language does not provide explicit support for declaring this and does not enforce it. If you want to enforce this, you provide the code to do it yourself. Of course, there are approaches using decorators for checking types (see for example this package), plus early discussions of possible type checking approaches by Guido van Rossum, the creator of Python, here and here. At the current time I am appreciating the less restrictive style afforded by Python, with the option of providing type checking as desired. Only time will tell whether I will see a significant downside to this more flexible approach.
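
If you do want such a guarantee, you can code it yourself. A minimal illustrative sketch (TypedList is my own name, not a standard class):

```python
from __future__ import print_function

class TypedList(object):
    """Enforce element type by hand, giving (at run time) roughly the
    guarantee a C++ template or Java generic gives at compile time."""
    def __init__(self, elem_type):
        self.elem_type = elem_type
        self._items = []

    def append(self, item):
        if not isinstance(item, self.elem_type):
            raise TypeError("expected %s" % self.elem_type.__name__)
        self._items.append(item)

    def __iter__(self):
        return iter(self._items)

ints = TypedList(int)
ints.append(1)
ints.append(2)
print(list(ints))   # [1, 2]
```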

Decorators

I have mentioned decorators previously. They are useful for adding functionality to functions, such as adding debug output every time a function is called. As I thought about some of the uses, I realized that some of these things were once accomplished with preprocessor macros in C/C++. Of course, it would be possible to actually use a preprocessor with Python, or to use a package providing Python macro support such as MacroPy. You can't do everything with decorators that you could do with macros, but you can do a lot, and there are many interesting modules that use decorators. One of the things I like about decorators is that the notation (starting with '@') can add clarity to the code. It is also interesting to note that you can perform most, if not all, of what you can do with a macro preprocessor by taking advantage of the Python Abstract Syntax Tree (AST), which is a representation of the source of a Python program. Python allows this tree to be examined and modified before compiling, essentially allowing the structure and source of the program to be modified. An example of its use is the ability to include pyDatalog syntax within a Python program.
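
A minimal sketch of the debug-output use mentioned above (traced is my own illustrative name):

```python
from __future__ import print_function
from functools import wraps

def traced(func):
    """Print each call with its arguments, much as one might once
    have done with a debug macro in C."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        print("calling %s with %r %r" % (func.__name__, args, kwargs))
        return func(*args, **kwargs)
    return wrapper

@traced
def add(a, b):
    return a + b

print(add(2, 3))   # prints the trace line, then 5
```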

Declarative Style

To be honest, in the past I thought people describing a programming style as declarative were often using the term to mean "self documenting", just as a reason to avoid much commenting in their code. There are some languages that are declarative, such as Prolog (for now we'll just keep green versus red cuts out of the discussion--see the Bratko book). But I don't find languages like Java and C++ to be very declarative. At least C++, unlike Java, supports operator overloading, which can make things a little more declarative. Python supports operator overloading as well, but its support for declarative coding goes much further. For example, rather than having to use something like an isMemberOf method (for example, in Java) to see if some object is an element of a collection, in Python you could do something like:

       
if x in [1, 3, 6]:
       
 

which is clearly much more readable. Or say you want to iterate over both the keys and associated values in a dictionary:

       
for key, value in map.items():
       
 

This is much clearer than in most programming languages. While it may not be quite as declarative as something like Prolog, it comes closer to declaring what you are doing than what one typically sees in many languages. There are numerous other examples in Python as well. I also noticed that after a fairly short period of time, I was able to write these constructs from memory more easily than in other languages, because I found the notations easier to remember than the function names (including case) necessary in other languages.

Perhaps an even better example of the declarative style of Python is list comprehensions. Here is an example:

       
lst = [(a, b) for a in [1, 2, 3] for b in [3, 1, 5] if a != b]
print lst
       
 

which results in "[(1, 3), (1, 5), (2, 3), (2, 1), (2, 5), (3, 1), (3, 5)]" being printed. Compare this to doing the same thing without list comprehensions:

       
lst = []
for a in [1, 2, 3]:
    for b in [3, 1, 5]:
        if a != b:
            lst.append((a, b))
       
 
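
The same declarative style extends to dictionary and set comprehensions, as in this small sketch of my own:

```python
from __future__ import print_function

# A dict comprehension declares the mapping directly:
squares = {n: n * n for n in range(5)}
print(squares[4])          # 16

# Filtering reads much like the English description of the result:
evens = [n for n in range(10) if n % 2 == 0]
print(evens)               # [0, 2, 4, 6, 8]
```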

Packages

Just as Linux distributions have repositories with many thousands of packages, there are many thousands of packages (over 50,000 according to the PyPI site) available for Python. While there are libraries available for Java and C++, there is just not the same kind of availability that you see with Python. Although many of these packages may be small and of no interest to many, I have already found numerous packages that were needed and helpful, from working with spreadsheets to using ssh2 sessions. These packages are easy to download and install, and you can start using them immediately; Python includes functions to easily build and install packages, including dependencies. As I now work on using software within a company rather than producing software products for sale (which I did for many years), I don't know whether or not there are licensing issues with using these packages when producing products written in Python. In my current job that is not a factor.

Productivity

I have seen various estimates of productivity gains using Python versus other languages. For example, I read one comment that said many Python programs were 1/10th the size of their Java equivalents. I also read somewhere that Python programmers could be many times more productive than in Java or C++. I have to say I was initially skeptical, especially assuming you use similar IDEs. I used Eclipse for Java, and I am now using PyDev (which is Eclipse based) for most of my Python development. (Because I saw how useful it was, I donated to PyDev even before I got a Python job.) After a matter of months of really heavy use of Python, I have to say I am way more productive in Python. I'm afraid I don't have metrics, but I know it takes a lot less time to get something developed in Python. Part of this is due to the fact that I have to write a lot less code, but many other facets that I have mentioned earlier in this post also help lead to this result. But as I said, my evidence is anecdotal.

In the past I worked on programming projects with (often too) large numbers of programmers. Most of the time now I work by myself, or at most with one other programmer. I do use source code repositories, of course. I do a little work on one rather substantial project (in terms of function, not code size--this is Python we are talking about). Although it was written by someone else and I have only a small involvement, I think it showcases many of Python's strengths. It has just been open-sourced. For certain types of environments (working with lots of Linux servers) it can be a real time-saver, and I am glad the author will get some credit and recognition for what he has done. (Information about it is available here.) But because this is the only example of a fairly large program I have worked on so far, I cannot myself provide insight on creating large programs in Python, nor on how things work out with many programmers working on a project. Obviously others have created such programs and are in a better position to comment.

Conclusion

I have been using Python really heavily for only a matter of months. Although for most of my career I didn't have much choice in what language I used, I do regret that I didn't switch to Python for my personal programming projects years ago. That would have forced me to dive much more deeply into Python much earlier. Although I have read (and continue to read) books on Python, especially on object-oriented techniques and doing things in a Pythonic way, I feel like I am still just scratching the surface of what is possible.

I haven't been this excited about a programming language in a long time. As heavily as I have been using it in recent months, if it were just the initial excitement of something new, that would be over by now. The code I write is smaller, more concise, and easier to understand. I have a wealth of existing packages I can pull into my projects.  But beyond those pragmatic reasons, I really like the way the language has evolved, from everything-is-an-object to the many declarative constructs to decorators, context managers, and so on.  I suspect it evolved this way largely because it grew up in an Internet/open source environment; I think the "thousand eyes" aspect of open source probably comes into play here.  Coming from my background of mainly statically typed languages (except for specialized languages like Prolog), I'm sure I would have been predisposed to do things more like Java and C++.  I'm glad wiser heads prevailed as Python was updated and enhanced over time.
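To give a flavor of the constructs I mean, here is a small sketch of my own (not from any particular project) showing both a decorator and a context manager:

```python
import os
import time
from contextlib import contextmanager

# A decorator that reports how long a function call takes,
# without touching the function's own code.
def timed(func):
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            print("%s took %.3fs" % (func.__name__, time.time() - start))
    return wrapper

# A context manager that changes the working directory and
# guarantees the original is restored, even if an exception is raised.
@contextmanager
def working_directory(path):
    previous = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(previous)

@timed
def add(a, b):
    return a + b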

One thing I wonder about, based on my long experience as a programmer, is code quality and when in the project life-cycle bugs are found and eliminated.  In large programming projects you learn that the cost of fixing bugs goes up markedly the later they are found: it costs much more to fix them in system test than in earlier test phases, and much more still once a product gets to the field.  Obviously, if a compiler catches errors during development that would otherwise be found much later, that is beneficial.  Those kinds of errors are caught by the compiler for a statically typed language.  The questions about this are:
  1. What percentage of these types of issues even exist when you aren't using the same type of restrictive type system, and
  2. Of those that still exist, what percentage get past the development and unit test phases?
Again, I don't have enough experience with Python to give an informed answer to these questions.  I do think the use of dynamic languages like Python increases the importance of unit test cases and unit test frameworks. Such approaches have been shown to significantly improve quality in statically typed languages as well, but perhaps they matter even more with dynamically typed ones.  However, there is also the greatly reduced program size (for equivalent function) and the much more readable and understandable code one gets with Python, and I don't think the importance of these facets should be underestimated.  Those who were educated in the same era I was may remember the oft-referenced Miller article about how much information one can hold in mind at once.  This has always driven me to reduce the complexity of code by limiting how many details you have to understand and remember when using some object, API, etc. At this point I wonder whether the reduced program size, improved clarity, and more powerful built-in functionality (lists, dictionaries, etc.) don't far outweigh the benefits of statically typed languages.
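As a sketch of what I mean (my own contrived example), here is a trivial unit test, using the standard unittest module, that catches during development the kind of type mistake a Java compiler would flag at compile time:

```python
import unittest

def average(values):
    """Return the arithmetic mean of a sequence of numbers."""
    return sum(values) / float(len(values))

class AverageTest(unittest.TestCase):
    def test_mean(self):
        self.assertEqual(average([2, 4, 6]), 4.0)

    def test_rejects_strings(self):
        # In Java the compiler would reject passing strings here;
        # in Python, a unit test surfaces the mistake in development
        # rather than in system test or the field.
        with self.assertRaises(TypeError):
            average(["a", "b"])
```

Run with `python -m unittest` (or from a unit test framework in the IDE), this fails fast and cheaply, which is exactly the life-cycle point at which you want such bugs found.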

In summary, I know many will say I am late to the party while others will conclude that I have abandoned my roots (in enterprise programming using statically typed languages) for what they think is little more than a scripting language.  All I know is that I am excited about what I have learned so far, and am looking forward to what more I will discover and what I can produce with Python going forward. Even if for some reason I don't get to continue to use Python over the long term in my professional career, I know I can use it in my personal programming. I have already bought a book on PySide and am excited to write some GUI-based applications in the future.
