I am writing a Python platform for the simulation of distributed sensor swarms. The idea is that the end user can write a custom Node consisting of the SensorNode behaviour (communication, logging, etc.) combined with a number of different sensors.
The example below briefly demonstrates the concept.
```python
# prewritten
class Sensor(object):
    def __init__(self):
        print('Hello from Sensor')
        # ...

# prewritten
class PositionSensor(Sensor):
    def __init__(self):
        print('Hello from Position')
        Sensor.__init__(self)
        # ...

# prewritten
class BearingSensor(Sensor):
    def __init__(self):
        print('Hello from Bearing')
        Sensor.__init__(self)
        # ...

# prewritten
class SensorNode(object):
    def __init__(self):
        print('Hello from SensorNode')
        # ...

# USER WRITTEN
class MySensorNode(SensorNode, BearingSensor, PositionSensor):
    def CustomMethod(self):
        LogData = {'Position': self.position(),  # position() from PositionSensor
                   'Bearing': self.bearing()}    # bearing() from BearingSensor
        self.Log(LogData)                        # Log() from SensorNode
```
NEW EDIT:
Firstly an overview of what I am trying to achieve: I am writing a simulator to simulate swarm intelligence algorithms with particular focus on mobile sensor networks. These networks consist of many small robots communicating individual sensor data to build a complex sensory map of the environment.
The underlying goal of this project is to develop a simulation platform that provides abstracted interfaces to sensors, such that the same user-implemented functionality can be ported directly to a robotic swarm running embedded Linux. As robotic implementation is the goal, I need to design the software node so that it behaves the same as, and only has access to the information available to, a physical node.
As part of the simulation engine, I will be providing a set of classes modelling different types of sensors and different types of sensor node. I wish to abstract all this complexity away from the user such that all the user must do is define which sensors are present on the node, and what type of sensor node (mobile, fixed position) is being implemented.
My initial thinking was that every sensor would provide a read() method which would return the relevant values, however having read the responses to the question, I see that perhaps more descriptive method names would be beneficial (.distance(), .position(), .bearing(), etc).
I initially wanted use separate classes for the sensors (with common ancestors) so that a more technical user can easily extend one of the existing classes to create a new sensor if they wish. For example:
```
Sensor
 └── DistanceSensor (designed for 360-degree scan range)
      ├── IRSensor          (narrow)
      ├── UltrasonicSensor  (wider)
      └── SickLaser         (very wide)
```
The reason I was initially thinking of Multiple Inheritance (although it semi-breaks the IS-A relationship of inheritance) was due to the underlying principle behind the simulation system. Let me explain:
The user-implemented MySensorNode should not have direct access to its position within the environment (akin to a robot, the access is indirect through a sensor interface), similarly, the sensors should not know where they are. However, this lack of direct knowledge poses a problem, as the return values of the sensors are all dependent on their position and orientation within the environment (which needs to be simulated to return the correct values).
SensorNode, as a class implemented within the simulation libraries, is responsible for drawing the MySensorNode within the pygame environment – thus, it is the only class that should have direct access to the position and orientation of the sensor node within the environment.
SensorNode is also responsible for translation and rotation within the environment; however, this translation and rotation is a side effect of motor actuation.
What I mean by this is that robots cannot directly alter their position within the world, all they can do is provide power to motors, and movement within the world is a side-effect of the motors interaction with the environment. I need to model this accurately within the simulation.
So, to move, the user-implemented functionality may use:
motors(50,50)
This call will, as a side-effect, alter the position of the node within the world.
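To make the side-effect idea concrete, here is a minimal sketch of how the simulator might model it. All names, scaling constants, and the differential-drive kinematics are my own assumptions, not part of the original design: `motors()` only records power levels, and a separate `tick()` owned by the simulation engine integrates them into position and bearing.

```python
import math

class SensorNode(object):
    """Sketch: user code can only set motor power; the simulator's
    tick() turns that power into motion as a side effect."""

    def __init__(self):
        self._x, self._y = 0.0, 0.0   # simulator-private state
        self._bearing = 0.0           # degrees
        self._left, self._right = 0, 0

    def motors(self, left, right):
        # User-facing: request power levels (-100..100); nothing moves yet.
        self._left, self._right = left, right

    def tick(self, dt=1.0):
        # Called by the simulation engine, never by user code.
        # Scaling factors below are arbitrary placeholders.
        speed = (self._left + self._right) / 2.0 * 0.01
        turn = (self._right - self._left) * 0.1
        self._bearing = (self._bearing + turn * dt) % 360.0
        rad = math.radians(self._bearing)
        self._x += speed * math.cos(rad) * dt
        self._y += speed * math.sin(rad) * dt

node = SensorNode()
node.motors(50, 50)   # request forward motion...
node.tick()           # ...position changes only when the world steps
```

The key design point is that user code never touches `_x`, `_y`, or `_bearing` directly; movement happens only when the engine advances the world.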
If SensorNode were implemented using composition, SensorNode.motors(…) would not be able to directly alter instance variables (such as position), nor would MySensorNode.draw() resolve to SensorNode.draw(), so in my opinion SensorNode should be implemented using inheritance.
In terms of the sensors, the benefit of composition for a problem like this is obvious, MySensorNode is composed of a number of sensors – enough said.
However the problem as I see it is that the Sensors need access to their position and orientation within the world, and if you use composition you will end up with a call like:
```
>>> PosSensor.position((123, 456))
(123, 456)
```
Then again – thinking, you could pass self to the sensor upon initialisation, eg:
PosSensor = PositionSensor(self)
then later
PosSensor.position()
however, this PosSensor.position() would then need to access information local to the instance (passed as self during __init__()), so why call PosSensor at all when you can access the information locally? Also, passing your own instance to an object you are composed of seems not quite right: it crosses the boundaries of encapsulation and information hiding (even though Python doesn't do much to enforce information hiding).
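For comparison, here is a small sketch of the composition-with-back-reference pattern being discussed (class and method names are illustrative): the node passes itself to the sensor at construction time, and the sensor reads the simulated state through a narrow, simulator-private accessor.

```python
class PositionSensor(object):
    """Holds a reference back to the node it is mounted on; user code
    sees only position(), never the node's raw coordinates."""

    def __init__(self, node):
        self._node = node  # back-reference passed in as 'self' by the node

    def position(self):
        return self._node._sim_position()

class SensorNode(object):
    def __init__(self):
        self._x, self._y = 123, 456              # simulator-private state
        self.pos_sensor = PositionSensor(self)   # compose, passing self

    def _sim_position(self):
        # Only the simulation engine (and the sensor) use this.
        return (self._x, self._y)

node = SensorNode()
print(node.pos_sensor.position())  # -> (123, 456)
```

This is exactly the encapsulation trade-off described above: the sensor can answer without the user knowing the coordinates, at the cost of handing the sensor a reference to its owner.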
If the solution was implemented using multiple inheritance, these problems would disappear:
```python
class MySensorNode(SensorNode, PositionSensor, BearingSensor):
    def Think(self):
        while self.bearing() > 0:
            # bearing() is provided by BearingSensor and in the simulator
            # will simply access local variables provided by SensorNode
            # to return the bearing. In the robotic implementation, the
            # bearing() method will instead access C routines to read
            # the actual bearing from a compass sensor.
            self.motors(100, -100)  # spin on the spot; will, as a side
                                    # effect, alter the return value of
                                    # bearing()
        (Ox, Oy) = self.position()  # position() from PositionSensor
        while True:
            (Cx, Cy) = self.position()
            if Cx >= Ox + 100:
                break
            else:
                self.motors(100, 100)  # full speed ahead! will alter the
                                       # return value of position()
```
Hopefully this edit has clarified some things. If you have any questions, I'm more than happy to try to clarify further.
OLD THINGS:
When an object of type MySensorNode is constructed, all constructors from the superclasses need to be called. I do not want to complicate the user with having to write a custom constructor for MySensorNode which calls the constructor from each superclass. Ideally, what I would like to happen is:
```python
mSN = MySensorNode()
# At this point, the __init__() method is searched for and
# SensorNode.__init__() is called, given the order of inheritance
# in MySensorNode.__mro__.
# Somehow, I would also like to call all the other constructors
# that were not executed (i.e. BearingSensor and PositionSensor).
```
Any insight or general comments would be appreciated, Cheers 🙂
OLD EDIT: Doing something like:
```python
# prewritten
class SensorNode(object):
    def __init__(self):
        print('Hello from SensorNode')
        for clss in type(self).__mro__:
            if clss != SensorNode and clss != type(self):
                clss.__init__(self)
This works, as self is an instance of MySensorNode. However this solution is messy.
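For reference, the usual Python idiom for running every constructor in a diamond exactly once is cooperative `super()` calls, assuming the prewritten classes can be written that way from the start. A sketch using the class names from this question:

```python
class Sensor:
    def __init__(self, **kwargs):
        print('Hello from Sensor')
        super().__init__(**kwargs)

class PositionSensor(Sensor):
    def __init__(self, **kwargs):
        print('Hello from Position')
        super().__init__(**kwargs)

class BearingSensor(Sensor):
    def __init__(self, **kwargs):
        print('Hello from Bearing')
        super().__init__(**kwargs)

class SensorNode:
    def __init__(self, **kwargs):
        print('Hello from SensorNode')
        super().__init__(**kwargs)

class MySensorNode(SensorNode, BearingSensor, PositionSensor):
    pass  # no user-written constructor needed

mSN = MySensorNode()
# Each __init__ runs exactly once, in MRO order:
# SensorNode, BearingSensor, PositionSensor, Sensor
```

Because every constructor delegates to `super().__init__()` rather than naming its base class explicitly, Python walks the MRO once and no constructor is skipped or repeated, with no loop over `__mro__` needed.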
The sensor architecture can be solved by using composition if you want to stick to your original map-of-data design. You seem to be new to Python so I’ll try to keep idioms to a minimum.
The architectural problem you're describing with the actuators is solved by using mixins and proxying (via __getattr__) rather than inheritance. (Proxying can be a nice alternative to inheritance because the objects proxied to can be bound and unbound at runtime; you also don't have to worry about handling all initialization in a single constructor with this technique.) Finally, you can throw it all together with a simple node declaration.
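The original snippet is not shown here, so the following is a hedged sketch of what `__getattr__`-based proxying could look like; the sensor classes and return values are invented for illustration.

```python
class PositionSensor:
    def position(self):
        return (123, 456)

class BearingSensor:
    def bearing(self):
        return 90.0

class Node:
    """Forwards unknown attribute lookups to attached sensor objects.
    Sensors can be bound or unbound at runtime; no inheritance needed."""

    def __init__(self, *sensors):
        self._sensors = list(sensors)

    def attach(self, sensor):
        self._sensors.append(sensor)

    def __getattr__(self, name):
        # __getattr__ is only called when normal lookup fails,
        # so the Node's own attributes and methods always win.
        for sensor in self._sensors:
            if hasattr(sensor, name):
                return getattr(sensor, name)
        raise AttributeError(name)

node = Node(PositionSensor())
node.attach(BearingSensor())   # bound at runtime
print(node.position())         # -> (123, 456)
print(node.bearing())          # -> 90.0
```

Note that `node.position` resolves dynamically on every access, which is what makes runtime binding and unbinding of sensors possible.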
That should give you an idea of the alternatives to the rigid type system. Note that you don’t have to rely on the type hierarchy at all — just implement to a (potentially implicit) common interface.
LESS OLD:
After reading the question more carefully, I see that what you have is a classic example of diamond inheritance, which is the evil that makes people flee towards single inheritance.
You probably don't want this to begin with, since class hierarchy means squat in Python. What you want to do is make a SensorInterface (the minimum requirements for a sensor) and have a bunch of 'mixin' classes with totally independent functionality that can be invoked through methods of various names. In your sensor framework you shouldn't say things like isinstance(sensor, PositionSensor); you should ask 'can this sensor geo-locate?'. This is the heart of duck-typing philosophy and EAFP (Easier to Ask for Forgiveness than Permission), both of which the Python language embraces.
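The code that originally illustrated this is missing, so here is a hedged sketch of the EAFP style being described; the sensor classes and coordinates are invented for illustration.

```python
def log_geolocation(sensor):
    """Duck-typed: we don't care what class the sensor is, only
    whether it can geo-locate right now (EAFP style)."""
    try:
        pos = sensor.position()
    except AttributeError:
        return None          # this sensor can't geo-locate
    return pos

class GPS:                    # note: no common base class required
    def position(self):
        return (51.5, -0.1)

class Thermometer:
    def temperature(self):
        return 21.0

print(log_geolocation(GPS()))          # -> (51.5, -0.1)
print(log_geolocation(Thermometer()))  # -> None
```

Neither class inherits from any sensor hierarchy; the framework asks forgiveness (catches `AttributeError`) instead of asking permission via `isinstance`.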
You should probably describe what methods these sensors will actually implement so we can describe how you can use mixin classes for your plugin architecture.
OLD:
If they write the code in a module that gets put in a plugin package or what have you, you can magically instrument the classes for them when you import their plugin modules. Something along the lines of this snippet (untested):
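The snippet referred to above is absent, so here is one possible (untested-in-the-original-sense) sketch of the idea: import a plugin module by name, find the user's SensorNode subclasses in it with `inspect`, and instrument them. All names here are assumptions; the demo uses an in-memory module in place of a real plugin file.

```python
import importlib
import inspect
import sys
import types

class SensorNode:
    pass  # stand-in for the framework's base class

def load_plugin(module_name):
    """Import a user plugin module and instrument every SensorNode
    subclass found in it. Real instrumentation might wrap methods or
    register the node with the engine; here we just tag the class."""
    module = importlib.import_module(module_name)
    nodes = []
    for _name, obj in inspect.getmembers(module, inspect.isclass):
        if issubclass(obj, SensorNode) and obj is not SensorNode:
            obj._registered = True   # example instrumentation
            nodes.append(obj)
    return nodes

# Demo: an in-memory module standing in for a real plugin file.
plugin = types.ModuleType('user_plugin')
class MyNode(SensorNode):
    pass
plugin.MyNode = MyNode
sys.modules['user_plugin'] = plugin

found = load_plugin('user_plugin')
print(found)  # -> [<class '...MyNode'>]
```

The user never writes registration boilerplate; simply defining a SensorNode subclass in a plugin module is enough for the framework to pick it up at import time.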
You can find the modules within a package with another function.
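That other function can be built on the standard library's `pkgutil.iter_modules`; a minimal sketch (the demo just uses the stdlib `email` package as a convenient stand-in for a plugins package):

```python
import pkgutil

def find_plugin_modules(package):
    """Yield the names of modules inside an already-imported package,
    e.g. to feed each one to a plugin loader."""
    for info in pkgutil.iter_modules(package.__path__):
        yield info.name

import email  # any package works; email is just a handy stdlib example
names = list(find_plugin_modules(email))
```

Combined with an import-and-instrument function, this gives a complete plugin discovery loop: scan the package, import each module found, and instrument the classes inside.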