Sniff is a "Scratch-like" programming language that's designed to help Scratchers move gently from Scratch to more conventional languages. They can start writing programs, without having to learn a new language because Sniff is based on Scratch. They learn a little more about variables, compiling, syntax errors (!), and they can have fun controlling real hardware while they're doing it.

Saturday, 22 November 2014

More Weather - there's a lot of it about!

Build a machine to measure wind speed, said the teacher... OK. In that case I'm going to make my machine using this anemometer I just happen to have in one of the many boxes of "interesting components" that are lying around Sniff labs. Of all places it came from Maplin (kind of a UK Radio Shack), who typically charge twice as much as anyone else for stuff, but in this case they had it as a spare part for their weather stations, currently retailing for £2.49. That's not a typo - stupid cheap. They also have a rain gauge for £4.99.


It has a simple switch that closes once per revolution. Connecting one side of the switch to ground, we wire the other side to an Arduino input, with a pull-up to 5V. Of course I did this using the preferred 3-pin Dupont header, so it plugs straight into a sensor shield!


From there it seemed pretty obvious to use an i2c lcd display (my current favourite device - I use it in every project). I powered it from a USB mains charger, and put the wires through the window, out onto the balcony where the anemometer was installed.


make anemometer digital input 2
make count number
when start
.forever
..#count one revolution each time the switch closes (with a short debounce delay)
..wait until anemometer
..wait 0.01 secs
..change count by 1
..wait until not anemometer
..wait 0.01 secs

The code to actually process the input is nice and simple - wait for the switch to go high, increment the count, wait for it to go low again. There are a couple of delays in there in case the switch bounces. I've not fine-tuned them but it works pretty solidly.

make i2c device
make display lcdi2c device
make message string

make fastest number

when start
.set fastest to 0
.forever
..
..set count to 0
..wait 4 secs
..set count to count / 4
..
..if count > fastest
...set fastest to count
..tell display to "clear"
..set message to join "now:"[count]
..tell display to "show"
..set message to join "max:" [fastest]

..tell display to "show"

As we're only measuring whole revolutions, we average the speed over 4 seconds. That seemed a good compromise - too short a measuring period and we wouldn't be as accurate, but any longer and we might be averaging the speed over several gusts. It would be interesting to know if there's an official way to do this - our local "real" weather station records both average speed and maximum gust speed, so there's clearly more we could do here.

We record the maximum speed as well as displaying the current speed. This is in revolutions per second. I happen to know (because I looked it up, but you could measure the circumference) that one revolution per second is about 1.5mph. Over the few days we've been running it, the max recorded each day has been about 6.5 or 7 revolutions, which is about 10.5mph. Converting to metric, that's about 16.5kph.
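If you wanted the display to read in real units rather than revolutions, the conversion is only a couple of lines in the main loop. Here's a sketch of the idea (just a sketch, using the rough 1.5mph-per-rev/sec figure above; mph and kph would need declaring as numbers):

..set count to count / 4
..#count is now revolutions per second
..set mph to count*1.5
..set kph to mph*1.609
..set message to join "mph:" [mph]
..tell display to "show"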

Checking with the official records for the last few days:


Wind speeds have been peaking at around 17kph each day!! There are a couple of spikes we missed, but our anemometer's location is on the balcony of the Sniff Mansion, which was chosen for convenience and safety of installation, rather than accuracy! I'm calling that a pretty good win.

Just for fun I added a ds18 to record min and max temperature, and of course we could easily log all this to an SD card, but maybe that's for next time!
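The min/max part of that is only a few lines of Sniff. Here's a rough sketch of the idea (just the tracking logic - the ds18 setup and reading aren't shown here, and the variable names are mine):

make temperature number
make minTemp number
make maxTemp number

when start
.set minTemp to 999
.set maxTemp to -999
.forever
..#read the ds18 into temperature here
..if temperature < minTemp
...set minTemp to temperature
..if temperature > maxTemp
...set maxTemp to temperature
..wait 10 secs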

Tuesday, 11 November 2014

Ray Tracing in Sniff (on Arduino and Pi).

There are two ways of producing 3D computer graphics: Ray Tracing and Scanline. Scanline approaches draw one object at a time, while ray tracing draws one pixel at a time. For big scenes like the ones used in movies this makes Scanline more efficient, as you don't need to hold millions of objects in memory - just the one you're drawing. Only recently have machines got powerful enough to ray trace significant parts of movies - almost all the Pixar stuff to date has been Scanline. However in the last few years ray tracing has finally made it to the movies, as it can produce some optical effects which are hard to do otherwise - it's just slow.

However to make 3D graphics on an Arduino we have a different problem - we can't hold the pixels in memory! That makes scanline pretty much impossible. However if we're prepared to wait, and keep our scene simple enough that it can fit in memory, then ray tracing is perfectly possible.


make spheres list of number
when makeScene
.repeat 10
..add pick random -15 to 15 to spheres #X
..add pick random -15 to 15 to spheres #Y
..add pick random 40 to 80 to spheres #Z
..add 1*(pick random 1 to 10) to spheres #Radius
..add 0.01*(pick random 0 to 100) to spheres #red
..add 0.01*(pick random 0 to 100) to spheres #green
..add 0.01*(pick random 0 to 100) to spheres #blue

We can make a simple scene by storing the parameters to describe 10 spheres in a list. Ideally we'd use one list for each parameter (sphereX, sphereY etc) but that would use up too much memory. Storing them like this only makes one list, so it will fit on an Uno.
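Because everything is packed into one flat list, each sphere simply occupies seven consecutive items, so sphere n starts at item (n-1)*7+1. As a sketch (the variable names here are just illustrative, not from the real code), pulling sphere n's centre and radius back out looks like this:

.set sphereStart to (sphereNumber-1)*7+1
.set oX to item sphereStart of spheres
.set oY to item sphereStart+1 of spheres
.set oZ to item sphereStart+2 of spheres
.set oR to item sphereStart+3 of spheres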

Our main program then becomes:

.set displayX to  0
.repeat until displayX>xRes
..set displayY to 0
..repeat until displayY>yRes
...set originX to 0
...set originY to 0
...set originZ to 0
...set dirX to displayX-(xRes/2)
...set dirY to displayY-(yRes/2)
...set dirZ to imageDist
...broadcast normalizeDir and wait
...
...broadcast trace and wait
...
...tell display to "setPixel"
...change displayY by 1
..change displayX by 1

For each pixel in the display we work out a vector dir[XYZ] that a ray from the camera (at origin[XYZ]) would travel along. We set the z component to represent how far the screen is from the viewer, and adjusting that will control the field of view. Then we normalise dir so that it has a length of 1 (not strictly necessary, but usually a good idea). We then call the trace script to figure out what the light along that ray will look like.
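The normalizeDir script isn't listed above, but it's only a few lines - something like this (my sketch of it, using the same vector-length calculation the code applies to hVec later on):

when normalizeDir
.set len to sqrt of (dirX*dirX+dirY*dirY+dirZ*dirZ)
.set dirX to dirX/len
.set dirY to dirY/len
.set dirZ to dirZ/len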

when trace
.repeat 2
..broadcast intersectScene and wait
..if bestID = 0
...stop script
..

The important part here is that trace calls intersectScene to figure out what object the ray hit. If it didn't hit anything, then it stops. If it did hit something, then we need to figure out its colour. To do that we'll need more information:

..set hitID to bestID
..set hitX to originX+bestT*dirX
..set hitY to originY+bestT*dirY
..set hitZ to originZ+bestT*dirZ
..set oX to item bestID+0 of spheres
..set oY to item bestID+1 of spheres
..set oZ to item bestID+2 of spheres
..set oR to item bestID+3 of spheres
..set nX to (hitX-oX)/oR
..set nY to (hitY-oY)/oR
..set nZ to (hitZ-oZ)/oR
..set nDotI to -1*(nX*dirX+nY*dirY+nZ*dirZ)
..set vVecX to -1*dirX
..set vVecY to -1*dirY
..set vVecZ to -1*dirZ
..set refX to (dirX+2*nDotI*nX)
..set refY to (dirY+2*nDotI*nY)
..set refZ to (dirZ+2*nDotI*nZ)

intersectScene calculates bestT, which is the distance along the ray until we hit something, so we can find hit[XYZ] by moving along the ray from the origin. Now we know where we hit the sphere, we can find the surface normal (the vector pointing directly away from the surface), by finding the vector from the centre of the sphere to the hit point (and dividing by the radius to normalise it).

nDotI is useful, as it tells us to what extent the surface is facing the viewer. vVec is the vector from the hit point towards the observer, and ref[XYZ] is the mirror reflection direction.

Any light hitting the surface is going to be attenuated by the surface colour, so:
..set weightR to (item hitID+4 of spheres)*weightR
..set weightG to (item hitID+5 of spheres)*weightG
..set weightB to (item hitID+6 of spheres)*weightB

Let's assume there's a little bit of light hitting the surface randomly, just because light bounces around in the real world. We call this ambient in computer graphics - it's a bit of a bodge, but it stops black parts of the scene being completely black, and we just add a little bit of it to the pixel colour:

..change pixelR by 0.1*weightR
..change pixelG by 0.1*weightG
..change pixelB by 0.1*weightB

For more advanced surface illumination we need some lights. We can store them in a list just like we did with the spheres (X, Y, Z and power for each light), then loop over them four items at a time:
..set lightCount to 1
..repeat until lightCount > length of lights
...set lightX to item lightCount of lights
...set lightY to item lightCount+1 of lights
...set lightZ to item lightCount+2 of lights
...set lightPower to item lightCount+3 of lights
...change lightCount by 4
...

And now we calculate a new ray from the hit point to the light:
...set originX to hitX
...set originY to hitY
...set originZ to hitZ
...set dirX to lightX-hitX
...set dirY to lightY-hitY
...set dirZ to lightZ-hitZ

We can use that vector to calculate how much of the light's energy hits the surface:
...set lightAtten to lightPower/(dirX*dirX+dirY*dirY+dirZ*dirZ)

Imagine a sphere, with a point light source at its centre. All of the energy from the source hits the inside of the sphere. The energy is shared over the sphere's surface area. If we doubled the radius of the sphere, its surface area would increase by a factor of 4 - area is proportional to radius squared - so we get the inverse square law: lights get dimmer with the square of distance.

Now we normalise again, and calculate N.L
...broadcast normalizeDir and wait
...set nDotL to (nX*dirX+nY*dirY+nZ*dirZ)

N.L tells us if the surface is facing the light source - if it's not, we move on.

...if nDotL > 0
....broadcast intersectScene and wait
....if bestID=0
.....set hVecX to vVecX+dirX
.....set hVecY to vVecY+dirY
.....set hVecZ to vVecZ+dirZ
.....set len to sqrt of (hVecX*hVecX+hVecY*hVecY+hVecZ*hVecZ)
.....set hVecX to hVecX/len
.....set hVecY to hVecY/len
.....set hVecZ to hVecZ/len
.....set nDotH to (nX*hVecX+nY*hVecY+nZ*hVecZ)
.....if nDotH>0
......set nDotH to 10^ of (10* log of nDotH)
......change pixelR by lightAtten*nDotL*weightR*nDotH
......change pixelG by lightAtten*nDotL*weightG*nDotH
......change pixelB by lightAtten*nDotL*weightB*nDotH

If the surface is facing the light, we fire the ray into the scene, and hope it doesn't hit anything. If it did, then we're in shadow. If we get this far we know the light actually hits the surface, so we need to calculate how much is going to get reflected towards us - this is called the BRDF.

There are lots of different ways of calculating this - different surfaces have different BRDFs. It's what makes metal and plastic look different, even when they're the same colour. Here we're using a simple metallic-style BRDF.

We know the direction of the viewer and the light. To get a perfect reflection of the light towards the viewer, the surface normal would have to be exactly half way between them (angle of incidence = angle of reflection). But chances are this isn't the case. Instead the question we can ask is what would N need to be to get a perfect reflection - we calculate this and call it hVec.

Now we ask how similar N and H are. It turns out that's really easy to calculate using a dot product. A good way of thinking of the dot product is "how alike are these two vectors?". 1 means they're the same, 0 means 90 degrees apart, -1 means opposite (assuming they're normalized). So we take the dot product. Raising that to a power means that we get a value near 1 only when they're very similar - the line "set nDotH to 10^ of (10* log of nDotH)" in the code above is just nDotH raised to the 10th power, written using logs. Then we use that to add some more colour to the pixel.

Having calculated the "local" illumunation - light from light sources hitting the surface, we add some "global" illumination - light which bounces of more than one surface. If we were doing this in C we might use recursion to call trace again, but its actually more efficient to just set up the new direction and go back round in a loop:
..
..set dirX to refX
..set dirY to refY
..set dirZ to refZ
..broadcast normalizeDir and wait


The actual intersection code is pretty simple - we just go through each sphere in turn, and check if the ray hits it. If it does, then we see if it's closer than the closest hit we've found so far. We also check that it's not too close to the starting point - if we're firing a ray from the surface of a sphere, we don't want to hit that same sphere due to rounding errors.

As for the sphere intersection itself - it looks complex, but it's straight out of the textbooks so I won't go through the maths here.
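That said, if you want to see the overall shape, here's a rough sketch of how an intersectScene script can be put together (this is my reconstruction of what's described above, not the code from the download - variable declarations are omitted, and it assumes dir has been normalised so the quadratic's leading term is 1):

when intersectScene
.set bestID to 0
.set bestT to 999999
.set sphereIndex to 1
.repeat until sphereIndex > length of spheres
..#vector from the ray origin to the sphere centre
..set ocX to (item sphereIndex of spheres)-originX
..set ocY to (item sphereIndex+1 of spheres)-originY
..set ocZ to (item sphereIndex+2 of spheres)-originZ
..set oR to item sphereIndex+3 of spheres
..#textbook ray/sphere quadratic: t*t - 2*b*t + c = 0
..set b to ocX*dirX+ocY*dirY+ocZ*dirZ
..set c to (ocX*ocX+ocY*ocY+ocZ*ocZ)-oR*oR
..set disc to b*b-c
..if disc > 0
...set root to sqrt of disc
...#nearest hit, ignoring anything behind us or too close to the start point
...set t to b-root
...if t > 0.001
....if t < bestT
.....set bestT to t
.....set bestID to sphereIndex
..change sphereIndex by 7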

The final interesting bit of code is in calculating the pixel colours. So far we've been adding up pixel[RGB], and we expect to have a value somewhere between 0 and 1 (though values higher than 1 are totally OK too!), but in Sniff we use colour values for each channel as whole numbers between 0 and 7 - this is clunky, but means you can set rough colours quickly and easily... if we think of a better way then we'll use it. To turn our light value into a Sniff colour we use the code:
...set pixelR to round (pixelR*7+(pick random 0 to 100)*0.01-0.5)
...if pixelR<0
....set pixelR to 0
...if pixelR>7
....set pixelR to 7
(repeat for G and B)
...set displayColor to 100*pixelR
...change displayColor by 10*pixelG
...change displayColor by pixelB

We take our value and scale it to the range 0-7. Then we add a random value in the range -0.5 to 0.5, before rounding to the nearest whole value. Surprisingly this randomness makes the whole image look much better, as it hides banding artefacts by turning them into noise. Statistically the error in the image is unchanged, but rather than getting blocks of pixels which are VERY wrong, we get a little noise shared evenly over the whole image, which looks MUCH nicer.

And there you have it. An Arduino ray tracer in Sniff.

As this is purely calculation driven, the code works essentially unchanged on any Sniff machine. To move it onto a Raspberry Pi, just replace the first line:
make display tft device
with
make display framebuffer device
and you get a version which works on Pi (or other Linux).


Running on the Pi is about 100 times faster. Hardly surprising, as the Pi is running at 50 times the clock speed. More importantly though, the Pi has an FPU - a floating point unit - making the numerical calculations massively faster.

Here's the code

Sorry if I've had to gloss over a few parts of this - there's a lot of maths and physics involved, and rendering is a pretty big topic to squeeze into one blog post (I could write a book... or two!). Sniff isn't really the ideal language for it either - it would be much easier in a language with data structures and proper functions (though it's far from the worst) - but it was fun to try.

Hopefully I've explained most of what's going on!

Saturday, 8 November 2014

Stroboscope

Here's a quick science experiment for a wet Saturday afternoon...

When things are moving too fast to see, we can take a picture of them to capture a single moment of the movement. We can do the same thing without the camera, by simply using the camera's flash. If you're in a dark-ish room, then you'll see a single bright frozen instant of a moving subject.

But what if something is spinning - like a power drill, an engine or a wheel? It looks like a blur, so it's hard to see what's going on, but if we could freeze the image every time the rotation got to the same place, it would look like it was stationary. We can do that by flashing a bright light at exactly the same speed as the object is spinning, so if an engine is rotating at 100 revs per second then flashing a light at 100Hz would let us see the motor clearly, as if it were still. If we fire it at 99Hz then on each flash the engine will have got a little further around - and it would look like the object is spinning at the difference of the two speeds - once per second!

To make this happen I hooked up a couple of LEDs to Arduino pins 2 and 3, so we could flash them. We can also use a potentiometer connected to A0 to adjust the speed. A few lines of Sniff:


make led1 digital output 2
make led2 digital output 3
make pot analog input A0

make delay number
when start
.forever
..set delay to pot*0.2
..wait delay secs
..set led1 to on
..set led2 to on
..wait 0.005 secs
..set led1 to off
..set led2 to off

And we've got a stroboscope!
Here we've got a Lego cog held in an electric drill. It's spinning fast enough that it would normally look like a blurred disk, but with the Arduino slowing the movement down it looks as if it's spinning slowly. Update: Try stepping through the video a frame at a time... The video frame rate is faster than the strobe, so you can see the light flashing on and off, capturing the same part of the rotation each time, while the other parts of the cycle are in darkness!!!

Adjusting the pot to control the delay changes the apparent speed of the motion. You can also play with the 0.005 second delay - making it longer makes everything brighter, but if it's too long then it will look blurry.

The code could easily be developed so you could display and/or set the rate of flashing exactly, so you could measure how fast the drill is actually spinning...
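As a rough starting point, the flash rate is just the reciprocal of the full cycle time (the variable delay plus the 0.005 seconds the LEDs are on), so a second script could report it over the serial connection. A sketch of the idea (untested - it just reads the delay set by the main loop):

make frequency number

when start
.forever
..set frequency to 1/(delay+0.005)
..say join "flash rate (Hz): " [ frequency ]
..wait 1 secs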

Friday, 7 November 2014

Release 12: Sniffpad - the Sniff IDE, written in Sniff!

Sniff's come a long way really fast since we initially started developing it, but two really important things have been on the TODO list since day 1. Getting a version running on Windows was something we knew was important, but getting the resources to develop it held things back until last month, when we finally released version 11 for Windows.

We're excited now to be able to tick off the other big outstanding feature: Sniff now has an IDE! Getting an IDE running was tricky because, while writing command line code that will run on Mac, Linux and Pi is relatively easy (Windows is a bit harder, hence the delay to Win32 Sniff), every platform has its own way of drawing on the screen. It's really hard to write a program which draws on the screen and runs without modification on lots of platforms. There are tools to make it easier, but they're not ideal, and often require the end user to install libraries and the like.

We wanted something that was simple, lightweight, and ran on all the platforms that Sniff runs on... We were stuck. Then last month, as part of the win32 work, we wrote a Sniff device that could open a window. Originally it was intended to be just an alternative to the Linux framebuffer device, but once it was working on win32 and X11, there was a lightbulb moment...

We could write the Sniff IDE in Sniff!

It was perfect - by definition Sniff runs on the platforms we want to run the IDE on! Not only that, but it would demonstrate that Sniff is actually pretty powerful, and can do "real" programming. Developing a program like that would shake bugs out of the system (we found a couple of bugs, and tightened a few other things up), and best of all, once we released it, if you don't like it you can add features yourself, because it's WRITTEN IN SNIFF!

And here it is! It runs identically on Windows and Linux (including the Pi). It will also run on Mac, but requires X11 to be installed, which isn't ideal - we're looking at how we can fix that!

Install and set up Sniff as usual, then cd to the directory you're using to keep your code in, type "sniffpad", and off you go.

On the toolbar at the top are buttons to load and save. These bring up a dialog panel, where you can type in a new filename. You can also use the up/down cursor keys here to scroll through existing files, which turns out to be pretty neat.

To run your code on the computer, first you need to compile it using the compile button, then run it with the run button. If "run" doesn't work on a Linux machine then check you have "xterm" installed. It's pretty standard so you should be able to install it using "apt-get" or "yum" if you don't have it already.

If you want to work on Arduino, you can use the next button to compile/download. The terminal opens a companion program, "sniffterm", which talks to the Arduino via the serial port. This is handy even if you don't use sniffpad.

Finally you can quit, which asks you if you want to save first.

That's really all there is to it! It's pretty basic, but it's not really intended to be a full-blown IDE - it's for writing simple programs and getting them running - most of the Sniff examples are less than 30 lines of code, so they fit on a single screen. Unlike the Arduino IDE, we don't intend this to be the main dev tool for everyone - Sniff is command line based, and this is a layer on top. Use whichever works for you (and if you want to, use Eclipse or Xcode to edit Sniff code!). On the other hand, if you think there's something missing (copy and paste is the one thing we will be adding soon), then you can load the source for sniffpad up in sniffpad itself, and make it better!

As before Release 12 comes in 2 flavours:
Win32 Sniff includes only Windows files, and uses DOS CR/LF
Generic Sniff includes all platforms, with Unix-style text files


Monday, 3 November 2014

Spirit of Radio: RF24 wireless comms!

There are a bunch of ways of communicating with an Arduino running Sniff. You could use Ethernet, but you'd need cables. You could use an RC transmitter or IR, but they're only one way. Wifi on Arduino is expensive. None of them is a one-size-fits-all solution - what we want is something cheap, fast, simple and bi-directional, so that, for example, I could set up the weather station at the bottom of the garden and collect the results in the house...

Enter the RF24. These amazing little radio transceivers cost about $1 on eBay, and contain everything you need to do some pretty nifty communications. The only downside holding them back was that they need 7 wires to connect them to an Arduino - not a problem in principle, but a pain when I needed to hook up two or three of them to test. Then I found the Funduino Joystick shield - about $5 from the Asian electronics online superstore of your choice. For some reason that's not clear, these have an RF24 socket!?! Odd if you want a joystick, but really handy if you need to hook up several RF24s...

With the boards acquired and RF24s plugged in, we're good to go. I set up two boards: one called server, which listens for a message and sends a reply back (just confirming all was well), and the other as a client, which sends a message when a button is pressed.

Here's the client first:
make spi device
make transmitter rf24 device
make message string
make radioChannel number

make buttonA digital input 2

make receivedMessage string
when start
.set radioChannel to 2
.tell transmitter to "setReceiveChannel"
.forever
..tell transmitter to "readString"
..if not message = ""
...set receivedMessage to message
...say receivedMessage

Here's the first part of the program where we set up the RF24 device and tell it to listen on channel 2. Internally rf24's support multiple frequencies, and allow multiple senders and receivers to share the same channel without their messages getting mixed, but for Sniff we simplify that - when you listen on channel 2, you will receive all the messages sent on channel 2.

Having set the channel we go into a loop, and tell the transmitter to try and read a string. If it gets one, then we make a copy, and print out the received string.

Transmitting is just as easy:
make messageToSend string
when start
.set radioChannel to 1
.tell transmitter to "setTransmitChannel"
.forever
..if not buttonA
...set messageToSend to [ timer ]
...set message to messageToSend
...tell transmitter to "writeString"
...say messageToSend
...wait until buttonA

We select channel 1 as the transmit channel, then wait for the user to press one of the buttons on the Funduino JS shield (buttons always come in handy). When the button is pressed we create a messageToSend, and assign it to the variable message, which we transmit using the writeString command. Then we print out the message and wait until the user stops pressing the button. The messages you send are limited to 32 characters, so keep them short.

All that messing around copying message to and from other strings is because we have two scripts changing message at the same time - if one script sets it to something, and then the other changes it, the first script might get confused. The way Scratch and Sniff handle this means that normally we don't have to worry about it, but occasionally it can trip you up, so to be safe we've created and displayed the message using a different variable.

Setting up the server is easier:
make spi device
make transmitter rf24 device
make message string
make radioChannel number

when start
.set radioChannel to 1
.tell transmitter to "setReceiveChannel"
.set radioChannel to 2
.tell transmitter to "setTransmitChannel"
.
.forever
..tell transmitter to "readString"
..if not message = ""
...say message
...set message to join "echo: " message
...#The other end has just finished transmitting
...#Give it 20mS to start listening again
...wait 20 millisecs
...tell transmitter to "writeString"

We set up the send and receive channels (noting we're now listening on 1 and sending on 2). We wait until we receive a message, add the word "echo" at the beginning, and then send it back.

The only gotcha here is that strictly the rf24 can't be both a transmitter and a receiver... but it can switch back and forth pretty quickly. Normally we set it up to listen, but when it needs to transmit it has to stop listening for a while, send the message, then switch back to listening. In this case the other end has just sent us a message, so at the instant we receive it, it's probably desperately trying to get back into listening mode asap. If we send a message straight back, then it might not be ready, so we wait just a little while to give it a chance to get ready for us.

When you press the button on the client, it will send the current timer value to the server, which prints it out. The server then sends an acknowledge "echo" back to the client.




You can get more fancy and have multiple clients. They can both send messages to the server, and because both are listening on the same channel, both see the echo replies. You could experiment with different receive channels for each client to avoid this, or simply add something into the echo, so the client can see if it's intended for it.
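For example, the client could prefix each message with a short ID of its own, which the server's echo then carries back. Something like this in the client's transmit script (a sketch only - the "client1" tag is made up, and each client would still need to check replies for its own tag):

...set messageToSend to join "client1 " [ timer ]
...set message to messageToSend
...tell transmitter to "writeString"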

And that's it - I'm sure we'll have lots more fun with these now that we've got device support and an easy way to hook them up.