I am investigating how the syntactic evolution of a language affects its semantics. For instance, Java's for loop syntax gained a compact form (the enhanced for loop) in version 5. Did the designers have to prove that the semantics are still preserved under this new syntax? Maybe this is a trivial example.
So, in general, how can one prove that a language’s semantics are still preserved even when its syntax has evolved from very verbose to compact?
Many thanks in advance for any insights/links.
Ketan
Okay, your last comment is much more answerable.
The short answer is: You don't. For one thing, when you add syntactic sugar you usually just capture a well-known, widely used pattern and give it special, nicer syntax – you don't replace large parts of the language's syntax. For such small additions, the translation can be formulated with informal descriptions and examples – for instance, PEP 343 defines the "with" statement relatively informally.
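The Java 5 case from the question works the same way: the Java Language Specification defines the enhanced for loop by giving its translation into the older explicit-Iterator form. A minimal sketch of both forms side by side (the desugared version is written by hand here, approximating what the spec describes, not compiler output):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ForEachDesugar {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("a", "b", "c");

        // Java 5 enhanced for loop: the compact, sugared form.
        StringBuilder sugared = new StringBuilder();
        for (String w : words) {
            sugared.append(w);
        }

        // Roughly what it translates to: the explicit Iterator loop
        // one would have written before Java 5.
        StringBuilder desugared = new StringBuilder();
        for (Iterator<String> it = words.iterator(); it.hasNext(); ) {
            String w = it.next();
            desugared.append(w);
        }

        System.out.println(sugared + " " + desugared); // prints "abc abc"
    }
}
```

Because the new form is *defined* as this translation, there is nothing separate to prove: its meaning is, by definition, the meaning of the old form.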
Now, when the change in syntax is so radical that the new language has hardly anything in common with the backend language, we're no longer talking about a change of syntax – we're talking about a compiler. But compilers aren't proven correct either. Well, some people actually try it; for real-world compilers, though, this rarely happens. Instead, correctness is checked by testing – by countless users and their programs.
And of course, all serious language implementations have a wide range of test cases (read: example programs, from basic to absurd) that should run and pass (or, in some cases, deliberately fail with an error), at least in official releases. When they do (and the test suite is worth its salt), you still don't know that there are no bugs, but it gives some confidence. As Dijkstra said: "Testing shows the presence, not the absence of bugs."
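In that spirit, one kind of test case such a suite might contain (a hypothetical sketch, not taken from any real compiler's test suite) checks that the sugared and hand-desugared forms of a construct agree on many inputs:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SugarEquivalenceTest {
    // Sugared form: sum a list using the Java 5 enhanced for loop.
    static int sumSugared(List<Integer> xs) {
        int total = 0;
        for (int x : xs) total += x;
        return total;
    }

    // Hand-desugared form: the same sum with an explicit Iterator loop.
    static int sumDesugared(List<Integer> xs) {
        int total = 0;
        for (Iterator<Integer> it = xs.iterator(); it.hasNext(); ) {
            total += it.next();
        }
        return total;
    }

    public static void main(String[] args) {
        // Exercise both forms on lists of increasing size, including empty.
        for (int n = 0; n < 100; n++) {
            List<Integer> xs = new ArrayList<>();
            for (int i = 0; i < n; i++) xs.add(i);
            if (sumSugared(xs) != sumDesugared(xs)) {
                throw new AssertionError("forms disagree at n=" + n);
            }
        }
        System.out.println("all cases agree");
    }
}
```

Passing such tests doesn't prove semantic preservation, exactly as the Dijkstra quote warns, but it is the kind of evidence real implementations actually rely on.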