Monday 19 November 2012

FTL without paradoxes

On The Alexandrian (a great RPG blog) there was a very interesting discussion about FTL and time travel.

I spent a fair while pondering this well-written article, which explains the issue quite clearly (note how long the discussion following it goes on for!), and came to an interesting conclusion.

The article does leave open one huge loophole for sci-fi: if there were a universal reference frame by which ALL instantaneous teleportation occurred, then FTL need not contradict Relativity or lead to causal paradoxes.

In brief: if all teleportation is relative to a universal reference frame based upon the Cosmic Microwave Background radiation (relative to which all galaxies are moving at well below light speed), and if all space travel is either well below light speed or via teleportation relative to the CMB, then there is no paradox, and any apparent time travel is only illusory and so slight as to be effectively ignorable :-)

My explanation is as follows:
Because (1) all motion is relative and (2) the speed of light is constant, perception of spacetime varies amongst observers with different velocities. The simplest consequence is that someone passing you at near the speed of light will perceive distances in the direction of their travel as being contracted. They think your spaceship looks squashed, but likewise you think theirs does.
One consequence of your changed perception of distance is that it changes your concept of what events occur "simultaneously".

Let's assume there's a hoop 4 light years away, stationary relative to you, and you see a basketball going through that hoop. As it is 4 light years distant, you consider the goal to have occurred 4 years ago. But if someone passes you at high (relative) velocity at the moment you see the goal, they see it too; due to length contraction, however, they may consider the event to have occurred 3 light years away, and thus to have occurred 3 years ago.
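To put illustrative numbers on this (my own arithmetic, not from the original article): take c = 1, so in your frame the goal event is at (t, x) = (-4 yr, 4 ly). For an observer passing you at that moment, receding directly from the hoop at 0.28c (so v = -0.28c along the line to the hoop), the Lorentz transformation gives

\[
\gamma = \frac{1}{\sqrt{1 - 0.28^2}} \approx 1.042, \quad
x' = \gamma\,(x - vt) = 1.042 \times (4 - 1.12) = 3\ \mathrm{ly}, \quad
t' = \gamma\,(t - vx/c^2) = 1.042 \times (-4 + 1.12) = -3\ \mathrm{yr}
\]

so that observer places the goal 3 light years away and 3 years in the past, and both of you still see the light arriving at exactly c (4 ly in 4 yr for you, 3 ly in 3 yr for them).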

Note that the ball and the hoop may be moving at wildly differing velocities, and the two of you may disagree on the velocity of the ball and hoop and on the location and timing of the event, but you both agree that the ball went through the hoop (the event itself has no velocity; it is a single point on the Minkowski diagram).

Now, assume that instantaneous travel is possible: you teleport somewhere else, arriving at the moment which is "now" there according to your own perception (based upon your own velocity). If your fellow observer teleports to the hoop and back while you wait for a year before teleporting to the hoop and back, you may find (depending on their speed and direction) that you both arrive at the hoop at the same moment.

This can create a causal paradox, i.e. a contradiction, and hence one of the assumptions must be false.

It is generally taken that the assumption that you can teleport is the false one, whereas it could equally well be the assumption that everyone teleports according to their own perception of simultaneity that is false.

This is a loophole (useful for sci-fi!) - if there were a universal reference frame (heretic!) by which all teleportation took place, then no causal paradox occurs. Anyone currently at your location teleporting to the hoop would arrive there together, but you wouldn't all agree on when it was you arrived. You might teleport 1 year into the past when you went to the hoop and then return 1 year into the future on the way back, or vice-versa. You could (apparently) teleport up to 4 years into the past or the future (!!!) but this would (effectively) be only a matter of perception, as you wouldn't be time travelling with respect to the universal frame.


There is debate amongst physicists on the net about a universal reference frame based upon the Cosmic Microwave Background radiation. The CMB dipole suggests that the Earth isn't moving very quickly with respect to it (roughly 370 km/s, about 0.12% of light speed), presumably nowhere near fast enough for significant relativistic effects. Thus, so long as space travel was all well below light speed, or via teleportation relative to the CMB, not only would the time travel be only illusory, it would be so slight as to be effectively ignorable.
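To put a rough number on "slight" (my own arithmetic, using the 4-light-year hoop example): teleporting by the CMB frame's "now" rather than your own shifts your apparent arrival time by at most

\[
\Delta t = \frac{v\,d}{c^2} \approx \frac{370\ \mathrm{km/s}}{c} \times 4\ \mathrm{yr} \approx 0.005\ \mathrm{yr} \approx 1.8\ \mathrm{days}
\]

i.e. a couple of days' disagreement about a four-year-distant event - measurable, but hardly dramatic time travel.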


Obviously someone else spotted this before me; googling, I found e.g. this.


Saturday 6 March 2010

Yesterday I discovered a defect in Javascript in IE6: sometimes when an error is thrown, the function simply returns instead! It occurs when you make a function global by assigning it as a property of the global object, such as when you're using the module pattern:

(function() {

    var GLOBAL = this;

    // private functions
    function foo() { /*...*/ }

    // public functions
    GLOBAL.myFn = function() { /*...*/ };

})();


I boiled it down to the following simple example:

this.myTest = function(x)
{
    throw new Error("foo");
};

function runMyTest()
{
    try
    {
        myTest("anything");
        alert("Simply returned");
    }
    catch(e)
    {
        alert("caught");
        return;
    }
    alert("Not caught");
}

In IE6 you get the "Simply returned" and "Not caught" alerts!!

Considering IE6's age I was surprised not to find any mention of this on the web, but there was a related problem reported at http://cappuccino.org/discuss/2010/03/01/internet-explorer-global-variables-and-stack-overflows/

The simplest solution to both problems (though not an example of clear code) is to rely upon implicit global variables, i.e.

(function() {

    // private functions
    function foo() { /*...*/ }

    // public functions
    myFn = function() { /*...*/ };

})();

Sunday 1 November 2009

The future of programming

Over the last couple of months I've been ruminating about what an ideal next-stage programming language would look like.

For starters it would be a strongly-typed, compiled language (none of this fail-at-runtime, impossible-to-debug nonsense masquerading as easy-to-write code; I spend my time modifying and debugging code written by others, and that's where you need to be efficient).

Also, for ease of adoption, it would have to be a Java-based language so that you could use the existing compilers and Java libraries.

The main thrust would be to avoid focussing too much on technical language features, and instead to move towards eliminating the current need to translate between what's in my head and what's written on the screen. That is, not to concentrate on how fantastic closures may or may not be, but to look forward to what we want code to look like. (And to know what to avoid, look at C++ template magic!) The canonical example of what we want is the leap forward of the for-each loop.

So here's my brain dump of the things currently getting in the way when transferring either way between brain and screen:

1. Auto-immutable versions of classes (i.e. a version of "const"). E.g. you can only call accessors, not modifiers on a const reference.

2. Value-types that are as good as built-in-types. (Pass by value, operator overloading, no inheritance).

3. Return multiple values (might be specialisation of automatic n-tuples, e.g. given classes C and D, {C,D} defines a new class containing a C & a D).

4. Expressive syntax as per Linq/Groovy e.g. for(x:set).where(x.isAble)... ("where" is user-definable external to X)

5. Named parameters when calling functions, e.g. setPos(x:10, y:20) rather than setPos(10,20) (a rough Java approximation is sketched after this list).

6. Optional parameters, i.e. possibly null parameters, can be omitted.

7. References are not null unless marked so (to be enforced by compiler) - i.e. remove almost all checks for null.

8. Preconditions / postconditions / invariants supported e.g. by annotation (to be enforced by compiler where possible).

9. Scope of variables inside try{} extends past the end of the try{}, to remove the horrendous split of declaration and initialisation.

10. Implicit try{} - i.e. declare a catch/finally and it auto-generates a try{} for the preceding code - no more nested try blocks.

11. Auto resource-cleanup. Probably achievable by the previous two points plus making close() etc. throw no exceptions, but I like the auto-pointer / resource-acquisition-is-initialisation (RAII) pattern, which declares at creation that the resource will be closed rather than having to remember to put it in a finally block.

12. Implicit implementation of I/F - no need to implement void fns or fns that can return null, and ability to declare default implementation in I/F.

13. Fns not virtual by default (you need to design for a function to be overridable, and currently you cannot tell which are).

14. Built in support for "copy on modify" - value-types (which can't have references to external classes, only internal value-types) must be efficient, which they wouldn't be without this.

All these points would make the code I work with on a daily basis simpler and easier to understand / work with, more explicit, with less brain/screen translation.
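To make a couple of these concrete, here is roughly what points 5 and 7 look like when approximated in today's Java (a sketch only - Position and its builder are invented names, and the null checks are done by hand where the compiler ought to do them):

final class Position {
    final int x;
    final int y;

    private Position(int x, int y) { this.x = x; this.y = y; }

    static Builder pos() { return new Builder(); }

    static final class Builder {
        private Integer x;  // nullable only until set
        private Integer y;

        Builder x(int x) { this.x = x; return this; }
        Builder y(int y) { this.y = y; return this; }

        Position build() {
            // point 7, enforced by hand rather than by the compiler:
            // fail loudly at the boundary instead of letting nulls leak further in
            if (x == null || y == null) {
                throw new IllegalStateException("x and y must both be set");
            }
            return new Position(x, y);
        }
    }
}

// the call site then reads almost like named parameters (point 5):
// Position p = Position.pos().x(10).y(20).build();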

I'm sure that I've forgotten some points, but that's a good enough brain dump for now.

Wednesday 10 June 2009

Silence and Javascript

I recently started a new job - hence the gap in posting. Ironically, considering my post about the importance of failure at compile-time, I have been grappling with some legacy Javascript. Any error is only detected by the program running incorrectly! It's like working with hand-crafted machine code on my C64 back in the 80's... Javascript is also a lesson against design by browser-war.

Monday 6 April 2009

Resource Handling in Java: Summary

My last 5 blog entries have covered the thorny issue of how to ensure resources are closed correctly. I recommended two solutions out of the various approaches. If you don't care about close() exceptions eclipsing earlier ones, go for the simple:


void foo() throws IOException {
    SomeResource r = new SomeResource();
    try {
        //do some stuff with r
    }
    finally {
        r.close();
    }
}

But don't forget to nest your try blocks for multiple resources!
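For two resources the nested form looks like this (the same sketch style as above):

void foo() throws IOException {
    SomeResource r1 = new SomeResource();
    try {
        SomeResource r2 = new SomeResource();
        try {
            //do some stuff with r1 and r2
        }
        finally {
            r2.close();
        }
    }
    finally {
        r1.close();
    }
}
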
If you want to avoid the eclipsing problem, or you wish to avoid the nesting of try blocks, then you need to write a framework API, which would enable you to write:

void foo() throws IOException {
    Closer closer = new Closer();//the new helper class
    SomeResource r1=null;
    SomeResource r2=null;
    try {
        //create and use r1 & r2
        closer.completed();
    }
    finally {
        closer.close(r2,r1);
    }
}

There are many subtle issues to decide upon when writing Closer, so I have purposefully refrained from giving a specific implementation, so that the argument is about the general design rather than the specifics. (For example, how do we cope with classes that don't implement Closeable?) I may blog an example implementation at a later date.

Java Resource Handling: Java 7

So, how does all this fit in with the proposals for solving this problem for Java 7?

ARM (Automatic Resource Management)

This proposal for Java 7, by Joshua Bloch, is for a version of C#'s "using" statement, enhanced by allowing support for multiple resources in a single declaration without nesting - e.g. my alternative #2 above:

void foo() throws IOException {
    Closer closer = new Closer();//the new helper class
    SomeResource r1=null;
    SomeResource r2=null;
    try {
        //create and use r1 & r2
        closer.completed();
    }
    finally {
        closer.close(r2,r1);
    }
}

becomes:

void foo() throws IOException {
    do (SomeResource r1=null;
        SomeResource r2=null;) {
        //create and use r1 & r2
    }
}

(though, obviously, in either example you might create r1&r2 where they are declared). The advantages of alternative #2 all apply to ARM, and porting from this pattern to this Java 7 construct would be trivial.

BGGA

The big rival approach is BGGA Closures, which gives handling close() as a prime example:

with(FileReader in : makeReader()) with(FileWriter out : makeWriter()) {
    // code using in and out
}

(where with() is a framework API). This example neatly shows the power of the proposal (no need for an extension specifically for handling close(), and the ability to change how exceptions thrown by close() are handled), but it also shows how the code becomes more complex. As with adding templates to a language, I'd need to spend hours reading the spec and playing with example code before I could become proficient, and it is still liable to obfuscation (what a wonderfully self-unaware word that is!).

Thoughts

Neither proposal solves the underlying problems - restricted-lifetime objects as per C++ are an elegant solution that (like the baby with the bathwater) has been thrown out along with explicit memory management, to the language's detriment. It would be far from easy to retrofit this.

Secondly, neither proposal fixes close() - it is not necessary for this API to throw exceptions. The exceptions it currently throws should be limited to those associated with flushing the final writes, so there should be a flush() call (that can throw) which you call before close() (which cannot throw), allowing close() to move safely into the finally block.
In fact, both proposals side-step this problem, with ARM leaving it as an open issue, and BGGA leaving it up to you!
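To make that concrete, here's roughly what the pattern could collapse to (just a sketch - writeReport is an invented example, and since today's close() does throw we still have to swallow its exception ourselves):

void writeReport(String path, String text) throws IOException {
    Writer out = new FileWriter(path);
    try {
        out.write(text);
        out.flush();   //any buffered-write failure surfaces here, inside the try
    }
    finally {
        try {
            out.close();   //ideally this could not throw; today we have to swallow it
        }
        catch(IOException e) { /*nothing useful left to report*/ }
    }
}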

Let's face it though - neither you nor I are likely to be influencing the choice for Java 7 - so the key thing is to write your code so that it is clear, robust, and capable of porting to whatever proposal is implemented. Both of my recommended approaches do that, and the choice is yours. 

Java Resource Handling: Throwing the correct error

To recap - we want to create a resource, do some processing with it, then close it. If the processing throws an exception, the resource must still get closed, and the first exception encountered must propagate.

Naive Approach

The call to close() in our code must be within a try block (for the occasions where it needs to be suppressed), and then the code gets quite complicated, e.g.:

void foo() throws IOException {
        IOException exception = null;
        SomeResource r1 = null;
        try {
            //create and use r1
        }
        catch(IOException e) {
            exception=e;
        }
        try {
            if(r1!=null) r1.close();
        }
        catch(IOException closeException) {
            if(exception==null) {
                throw closeException;
            }
        }
        if(exception!=null) {
            throw exception;
        }
    }

Not nice!

Alternative #1

How about we have more than one call to close()? Then the code can be quite significantly simplified:

void foo() throws IOException {
        SomeResource r1=null;
        try {
            //create and use r1
            r1.close();
        }
        finally {
            try {
                if(r1!=null) r1.close();
            }
            catch(IOException e) { /*swallow error*/ }
        }
    }
 
This relies on it being legal to call close() multiple times (we can always have a flag to stop it being called twice, which is easy to do if it's wrapped up in a framework).
The obvious extra simplification is to write a closeSilently function:

void closeSilently(Closeable c) {
    try {
        if(c!=null) c.close();
    }
    catch(IOException e) { /*swallow error*/ }
}

which enables us to write:

void foo() throws IOException {
    SomeResource r1=null;
    try {
        //create and use r1
        r1.close();
    }
    finally {
        closeSilently(r1);
    }
}

So let us critique this solution - slight errors in this code can lead to the worst class of defect (see my first post): omit the first call to close() and the code will appear to work correctly until close() throws an error, which is then silently ignored! You don't need nested try blocks to handle multiple resources, but you do need to duplicate all the calls to close(), in the same order as the calls to closeSilently(). This can be solved by means of a helper framework, but still the deadly silent error is a real problem...

Alternative #2

We used a try/catch + try/catch set-up in the Naive Approach in order to detect whether the resource-using code threw an exception or not. Catch isn't the only way to do this; note that the last line of the try block runs if and only if no exception was thrown, and we can use this fact instead, as follows:

void foo() throws IOException {
    boolean tryBlockCompleted = false;
    SomeResource r1=null;
    try {
        //create and use r1
        tryBlockCompleted = true;
    }
    finally {
        if(tryBlockCompleted) {
            r1.close();
        } else {
            closeSilently(r1);
        }
    }
}

A helper class would make life a lot easier here:

void foo() throws IOException {
    Closer closer = new Closer();//the new helper class
    SomeResource r1=null;
    try {
        //create and use r1
        closer.completed();
    }
    finally {
        closer.close(r1);
    }
}

In Closer::close() we can assert that completed() has been called, hence in the unit tests we can ensure that the code is used correctly :-)
How about multiple resources?

void foo() throws IOException {

    Closer closer = new Closer();//the new helper class
    SomeResource r1=null;
    SomeResource r2=null;
    try {
        //create and use r1 & r2
        closer.completed();
    }
    finally {
        closer.close(r2,r1);
    }
}

Not particularly complex :-)
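For concreteness, here is a minimal sketch of the kind of Closer I have in mind (illustrative only, assuming the resources implement Closeable; it glosses over the subtleties, such as what to do about classes that don't, and exactly how to assert that completed() was called):

import java.io.Closeable;
import java.io.IOException;

class Closer {
    private boolean completed = false;

    //call as the last statement of the try block
    void completed() {
        completed = true;
    }

    //call from finally, passing resources in the order they should be closed
    void close(Closeable... resources) throws IOException {
        IOException firstCloseException = null;
        for (Closeable resource : resources) {
            if (resource == null) continue;
            try {
                resource.close();
            } catch (IOException e) {
                if (firstCloseException == null) firstCloseException = e;
            }
        }
        //only let a close() exception escape if the try block completed;
        //otherwise the original exception is already propagating and must not be eclipsed
        if (completed && firstCloseException != null) {
            throw firstCloseException;
        }
    }
}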

Alternative #3

This is a minor variation on Alternative #2 above - do we have to make closing in the correct order explicit (as above), or can we add each closeable item to the closer as it is created and rely upon it to get the order correct? E.g.:


void foo() throws IOException {
    Closer closer = new Closer();//the new helper class
    try {
        SomeResource r1 = ...
        closer.add(r1);
        SomeResource r2 = ...
        closer.add(r2);
        //use r1 & r2
        closer.completed();
    }
    finally {
        closer.close();
    }
}

This alternative, whilst attractive, isn't actually any shorter - and omitted calls to Closer::add() could lead to silent runtime errors.

Alternative #4

Up until now the helper frameworks discussed have been classes (or functions) called from the resource-handling code. There is an alternative approach, Inversion of Control, where we pass the framework the resource-handling code we wish to run, and it calls our code from within its own error-handling and close-calling code. In Java 6 that means creating an anonymous inner class containing our resource-handling code and passing it to the framework, e.g.:

void foo() throws IOException {
    String result = new ResourceHandler() {
        //run() stands for whatever method the framework defines for the body
        protected String run() throws IOException {
            SomeResource r1=...;
            add(r1);
            SomeResource r2=...;
            add(r2);
            //use r1 & r2 and return the result
        }
    }.execute();
}

This is quite attractively short, but it looks confusing and still has the silent-defect problem associated with Alternative #3 if we omit a call to add().
(See link for a paper describing a very similar approach to this).

Alternative #5

How about we change the rules, and ask for multiple exceptions to be thrown when multiple exceptions occur; i.e. when an exception in the try block results in further exceptions while closing the resources, throw all of them.
See link for an example of this approach. That example is over-complex, and the boiler-plate code can easily be hidden as per Alternatives 2, 3 & 4, but this ignores the extra burden laid upon the client. Handling the array of exceptions thrown is rather over-complicated, and to what end?
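To see that burden concretely, a sketch (MultiIOException and getCauses() are names I've invented purely for illustration):

import java.io.IOException;
import java.util.List;

//hypothetical aggregate exception - the name and API are invented for illustration
class MultiIOException extends IOException {
    private final List<IOException> causes;
    MultiIOException(List<IOException> causes) {
        super(causes.size() + " exceptions thrown during processing and close");
        this.causes = causes;
    }
    List<IOException> getCauses() { return causes; }
}

//...and every caller now has to decide what to do with a whole list:
void caller() {
    try {
        foo();
    } catch (MultiIOException e) {
        for (IOException each : e.getCauses()) {
            each.printStackTrace(); //which of these do we actually act on?
        }
    }
}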

Summary

Of the various solutions presented, the only one which appears to be safe to use is Alternative #2; the others are too prone to silent error, or are pointlessly over-complicated.