
Event Handling

In between work and editing conference papers, I had a brainstorm. And that brainstorm is now Clay's new event engine. Here is what I've come up with:

  1. Objects in Clay will all have a common ancestor who understands all of the framework interactions.
  2. Objects during creation will bolt on additional behaviors via the mixin mechanism in TclOO (a rough sketch follows this list).
  3. Instead of invoking each other's methods directly, objects will throw messages at each other through an SQLite database.
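
To make the first two points a bit more concrete, here is a rough sketch in TclOO. Every name in it (ClayObject, PhysicalBehavior, the stand-in uuid) is a placeholder of my own, not settled Clay API:

package require TclOO

oo::class create ClayObject {
   # Common ancestor: every Clay object gets the framework plumbing
   variable my_uuid
   constructor {} {
      set my_uuid [info object namespace [self]]  ;# stand-in for a real uuid
   }
   method uuid {} { return $my_uuid }
   method tell {headers content} {
      # message posting goes here (see below)
   }
   method step {} {
      # message delivery goes here (see below)
   }
}
oo::class create PhysicalBehavior {
   # An optional behavior bolted on at creation time
   method throw_ball {target} {
      puts "[self] throws the ball at $target"
   }
}

set A [ClayObject new]
# Bolt the extra behavior on with TclOO's mixin mechanism
oo::objdefine $A mixin PhysicalBehavior
$A throw_ball "B"

The mixin is applied per object at creation time, so two objects built from the same ancestor can end up with very different behavior sets.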

The issue I'm looking at down the road is that we are probably going to want larger Clay simulations to be broken across multiple threads, or even multiple servers. We are also going to want Clay simulations to run apart from a GUI's main thread. A simulation going hot and heavy could effectively block a webserver from responding. And users like to know that the program is just busy, not locked up.

Now in a conventional OO program, your objects are all in one big happy family. They can invoke each other's public methods. And even do cool things like thread a coroutine through the method calls.

Let's take a simple interaction from my days of coding HLA, throwing a beach ball:

::scene::object create A
::scene::object create B

A throw_ball B

The global program is actually invoking A's methods directly. And we can assume that, behind the scenes, throw_ball is probably doing things with B's methods:

method throw_ball target {
   $target has_hit [self]
}
method has_hit who {
   puts "Ouch"
}

When all of your objects have an instance in the interpreter, this isn't an issue. However, what if A and B are in different threads? What if they are in different processes? What if objects A and B are in simulations on different computers?

Well then all of your calls, from the simplest to the most complex, involve a lot of wrapping code to direct messages to the object on the other end and await a reply.

method tell {object what args} {
   set location [::whereis::object $object]
   if {$location eq [my location]} {
      # The object is local: just invoke the method directly
      $object $what {*}$args
   } else {
      # The object is elsewhere: marshal the call over RPC
      tailcall ::rpc::send ${object}@${location} $what $args
   }
}
method throw_ball target {
   my tell $target has_hit [self]
}
method ouchie {who} {
   puts "$who says ouchie"
}
method has_hit who {
   my tell $who ouchie [self]
}

Which works, up to a point. And that point is when you have a few hundred objects that were written to demand instant gratification for their answers, doing all sorts of blocking while the network tries to keep up.

The model I have come up with is more like an old-fashioned correspondence game of chess. The players mail their moves to one another, and don't make another move until they get the next move from their opponent.
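
Concretely, each player's mailbox is a shared SQLite table. Here is a minimal sketch of the schema, using the column names that show up in the snippets below; the claydb handle, the file name, and the column types are my own guesses:

package require sqlite3

# One database shared by every object; a file path lets multiple
# processes on the same machine open it.
sqlite3 claydb clay_messages.sqlite
claydb eval {
   create table if not exists messages (
      msg_uuid    text primary key, -- id of this message
      msg_sender  text,             -- uuid of the object that sent it
      msg_rcpt    text,             -- uuid of the object it is addressed to
      msg_subject text,             -- what the message is about
      msg_reply   text,             -- msg_uuid of the message this answers
      msg_content text              -- payload
   )
}

With that in place, tell boils down to recording a row and handing back its uuid: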

method tell {headers content} {
   # Code to generate an SQL record for the message
   # (the omitted code sets msg_uuid for the new row)
   return $msg_uuid
}
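
Filling in that comment, a sketch of how the record might get generated; the new_uuid helper and the header defaults are assumptions of mine, while <db> is the database accessor the other methods already use:

method tell {headers content} {
   # Default every field, then let the caller's headers override them
   set msg_uuid    [my new_uuid]   ;# hypothetical helper that mints a fresh uuid
   set msg_sender  [my uuid]
   set msg_rcpt    {}
   set msg_subject {}
   set msg_reply   {}
   set msg_content $content
   dict with headers {}
   my <db> eval {
      insert into messages
         (msg_uuid, msg_sender, msg_rcpt, msg_subject, msg_reply, msg_content)
      values
         (:msg_uuid, :msg_sender, :msg_rcpt, :msg_subject, :msg_reply, :msg_content)
   }
   return $msg_uuid
}

A sender holds on to the returned uuid and watches for a row that answers it, which is exactly what throw_ball does: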

method throw_ball {who} {
   set msg_uuid [my tell [list msg_rcpt $who msg_subject has_hit] {}]
   # Poll until we get a reply
   while 1 {
      my <db> eval {select * from messages where msg_reply=:msg_uuid} {
         return $msg_content
      }
      yield
   }
}


method step {} {
   set rcpt [my uuid]
   my <db> eval {select * from messages where msg_rcpt=:rcpt} record {
      my react $record(msg_subject) [array get record]
   }
}
method react {subject msginfo} {
   dict with msginfo {}
   if {$subject eq "has_hit"} {
      my tell [list \
         msg_rcpt $msg_sender \
         msg_subject ouch \
         msg_reply $msg_uuid] {Ouch}
   }
}
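
None of this runs by itself; something has to pump the timestep and resume the coroutine that the yield in throw_ball implies. A bare-bones, single-process driver might look like the sketch below; run_simulation and the main method it calls are placeholders of my own, not part of Clay:

proc run_simulation {objects steps} {
   # Each object's behavior runs inside a coroutine, so a conversation
   # that is waiting on a reply (throw_ball above) can simply yield.
   set workers {}
   set n 0
   foreach obj $objects {
      set w worker[incr n]
      coroutine $w $obj main   ;# "main" is a hypothetical top-level behavior method
      lappend workers $w
   }
   for {set t 0} {$t < $steps} {incr t} {
      # Deliver pending messages: each object reacts to mail addressed to it
      foreach obj $objects {
         $obj step
      }
      # Then give every coroutine that is still alive another slice of time
      foreach w $workers {
         if {[llength [info commands $w]]} {
            $w
         }
      }
   }
}

With A and B from the beach-ball example, that would be driven by something like run_simulation [list A B] 100, assuming each has a main method that kicks off the exchange.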

All of the processes and threads running on the same machine can share a single SQLite database. For cluster processes, I could see a mechanism by which an outside process controls the timestep and then replicates messages over UDP or via HTTP.

In my next installment I'll be going over the mixin stuff. For now, sleep is calling....