Guido wrote up some thoughts on optional static typing. This will obviously lead to much debate, and no actual forward movement anytime soon, but (at least from my perspective) it looks like a thought experiment, meant only to produce discussion, so it's no big deal. There's no reason this should turn into a ternary-operator-style discussion.
My initial thoughts are that the thought experiment makes it pretty clear that optional type declarations aren't a good idea. Static type declarations are hard, and there are more interesting problems to work on. More specifically, I think the problem is that static type declarations aren't closed conceptually.
By "closed", I mean that introducing one aspect of static typing will make you think about another aspect, and so on, until it's snowballed out of control. There's no easy way to handle static typing, instead you dive straight into the deepest and most unanswered aspects of typing. And none of it makes any sense for Python.
ML, for instance, can handle typing in many elegant ways. But it also (from what I can tell from my reading) has a relative paucity of built-in types. The compiler has to know about lists and tuples and structures, because all compound types have to be understood by the typing system, and the compiler verifies the typing system. In Python our basic compound types (lists, tuples, dictionaries, objects) aren't really part of the compiler -- they aren't any more magic than anything else, and pretend dictionaries (like UserDict) are generally substitutable for the "real" thing.
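To make that concrete, here's a minimal sketch of the point about pretend dictionaries. PretendDict is a made-up stand-in (in the spirit of UserDict), not a real library class:

```python
# Nothing in Python's compiler treats dict as special: any object with
# the right methods can stand in for a "real" dictionary.

class PretendDict:
    """A fake mapping that computes values instead of storing them."""
    def __getitem__(self, key):
        return len(key)  # derive the value from the key
    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default

def describe(mapping):
    # This function never checks the type of its argument; it just
    # calls mapping methods, so both dict and PretendDict work.
    return mapping["spam"]

print(describe({"spam": 4}))    # a real dict
print(describe(PretendDict()))  # a pretend one -- same code path
```

Both calls print 4; the function can't tell (and doesn't care) which kind of mapping it got, which is exactly what a static typing system would have to account for.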
When Guido talks about expressions like min(a: iterable(T)) -> T, how is the compiler going to know what iterable is? In current Python it's not resolvable until runtime. And sure, we could "fix" its definition, not allowing iterable to be rebound. But that doesn't really solve the issue -- it only works for a small number of fixed containers, and isn't extensible to user-defined types.
In general, it's not clear what Guido is trying to achieve with this proposal. The only motivation he gives is:
Without losing the benefits of Python's dynamic typing, it would be nice if you had the option to add type declarations for your method arguments, variables and so on, and then the compiler would give you a warning if you did something that wasn't possible given what the compiler knows about those types.
This isn't a justification for type declarations, but for interface declarations. Interfaces tell you something about the method signatures, and could potentially in turn provide further contract information (e.g., IImage objects have a .histogram() method that returns a sequence of IColor objects). They don't tell you exactly what code will be run when you call a method -- that's what type declarations can provide, because given a type and a method name you can look up the specific code (setting aside subclassing). That's really useful when you are doing lots of static analysis -- like you would in ML or Boo -- but it won't get us very far in Python, and we don't need that level of static analysis to support source checkers (like PyChecker or PyLint) or to allow auto-completion in IDEs.
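The interface-versus-type distinction can be sketched with the stdlib abc module (standing in for Zope interfaces or PyProtocols); IImage, the histogram method, and the color tuples are all invented for illustration:

```python
# An interface promises a signature and a contract, not the code
# behind it: two implementations satisfy IImage with different bodies.

from abc import ABC, abstractmethod

class IImage(ABC):
    @abstractmethod
    def histogram(self):
        """Return a sequence of color values."""

class PNGImage(IImage):
    def histogram(self):
        return [(255, 0, 0)]        # one implementation...

class JPEGImage(IImage):
    def histogram(self):
        return [(0, 0, 255)] * 2    # ...and a quite different one

def first_color(image):
    # The interface guarantees histogram() exists and roughly what it
    # returns; it does not pin down which method body will run.
    return image.histogram()[0]

print(first_color(PNGImage()))
print(first_color(JPEGImage()))
```

A source checker or IDE only needs the interface-level information (the method exists, it returns a sequence); resolving which body runs is the extra step that full type declarations would buy, and that we mostly don't need.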
Of course, the container problem really isn't solved for interfaces either -- you could create an IIteratorInteger interface where .next() was declared to return an IInteger object, but this becomes awkward (since you then have to create an IIterableInteger interface and who knows what else). Interfaces that took parameters might be useful. From what I've seen, the current interface implementations (Zope and PyProtocols) don't provide this, but I may be mistaken.
Anyway, the fact that current interfaces lack these features isn't that big a deal. With the kind of interfaces Python has now, the stakes are much lower. They don't fix anything into the syntax of the language. You can be a little sloppy where they aren't sufficient to express your intentions now, and maybe later you can be more explicit. Or not, if people continue not to care too much (and if people continue not to care too much, that means it's just not that important). We can come to some consensus on interfaces, and we never will with type declarations; I don't think that's a sign of a structural problem in the Python community, but rather a sign that type declaration doesn't have a well-defined problem or solution.
PyProtocols actually does include one parameterizable interface factory: sequenceOf(aProtocol) returns a protocol object representing a "sequence of aProtocol". It's left to the user to create other parameterized interfaces if they have the need. You could also subclass protocols.InterfaceClass and add a __getitem__ method to create parameterizable interfaces, although you might need a couple of other things. Hm, maybe something like:

```python
class IFactory(ParameterizedInterface):
    createsType = InterfaceParameter()
    def create() -> createsType:
        """Create an instance of the appropriate type"""
```
and then IFactory[IFoo] would represent something whose create method returns an IFoo. Adapting to an InterfaceParameter directly would always raise an error; the idea is that the __getitem__ would somehow clone the methods and change their type signatures in the non-parameterized version of the interface.
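Here's a rough, purely illustrative sketch of that __getitem__ cloning idea. The Interface class and PARAM placeholder below are invented stand-ins, not the real protocols.InterfaceClass or InterfaceParameter:

```python
# A toy interface class whose [] operator produces a specialized copy,
# substituting the parameter placeholder into each declared signature.

PARAM = object()  # placeholder meaning "the interface parameter"

class Interface:
    def __init__(self, name, methods=None):
        self.name = name
        # maps method name -> declared return type (or PARAM)
        self.methods = dict(methods or {})

    def __getitem__(self, param):
        # Clone the interface, replacing the placeholder with the
        # concrete parameter in every method signature.
        specialized = {
            meth: (param if ret is PARAM else ret)
            for meth, ret in self.methods.items()
        }
        return Interface("%s[%s]" % (self.name, param.name), specialized)

IFoo = Interface("IFoo")
IFactory = Interface("IFactory", {"create": PARAM})

specialized = IFactory[IFoo]
print(specialized.name)                       # IFactory[IFoo]
print(specialized.methods["create"] is IFoo)  # True
```

The real thing would need to handle adaptation, method docstrings, and probably caching of the specialized interfaces, but the shape of the idea -- __getitem__ as an interface factory -- fits in a page.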