Jake Reynolds

Hi, I’m Jake, a 3rd year Electronic and Information Engineering student. In this project I’m focusing on the communications and low-level side of things, as that is what sparks my interest most. This is where, for me, the real problem solving happens, and where you come to understand how things work!

Academically, my interests range from the assembly level up to high-level languages and operating systems. During 3rd year, when I was able to choose my modules, I focussed on digital signal processing, concurrent and distributed programming, and communication networks. In my spare time, I go to dance lessons (salsa!) and climbing with friends, and I am a keen kayaker; this year I am the President of Imperial College Canoe Club, and am currently planning a summer tour to Norway!

Feel free to contact me at jr2812@ic.ac.uk about the project or any of the work I have detailed below.

Below is a blog of my work during this project. All dates are 2015.


Thursday 18/06 : Began with a quick shop in Maplins to buy some coax cable for aerials, and to be disappointed by the lack of OLED screens in stock. Continued making and debugging the payload for the rest of the day (including another trip to Maplins, as we had accidentally picked up the wrong size in the shop…). After much puzzlement at the LoRa/header issue – desoldering the sensors, using multimeters and oscilloscopes – we resorted to bypassing the cobbler. Why we didn’t think of this earlier, or try these connections with the meter, is a puzzle even to us. Removing the cobbler and attaching the screen/sensors to the header directly (soldered to a sacrificial spare header) produced a result that was neither pretty nor elegant but worked exceptionally well – all the components functioned together. Alex also managed to make an aerial with the components we bought, so we had a complete functioning payload for the first time.

Our first test run outside failed. Although disheartening, the cause was benign – the batteries simply needed replacing! After that, we managed to get the PITS transmitting data from the sensors in a payload. Worryingly, Alex mentioned that the code for some of the sensors may be incorrect, as the temperature reading is wrong. There may also be an error on my end: although the pressure reading is fine when taken alone, when taken within ‘tracker.c’ or ‘lora.c’ the value is incorrect. Very odd, as all of the other sensors work correctly in these loops, and are located in the same structure.

Spent some more time developing the app, so that the graph was actually functioning in a presentable manner. After some time and a few problems (I believe there is a big omission in the MPAndroidChart documentation), I managed to get the graph updating live in response to incoming telemetry data.

Wednesday 17/06 : Majority of the day again spent on payload construction and decisions. We have decided to forgo the ‘Black Box’: although interesting, it is heavy and expensive, and essentially only duplicates what we have made, minus the transmission capabilities. We decided to abandon hope of an alternative screen arriving, and attached the LCD display. After initially proposing to connect it in a similar manner to the camera, mounted below with cables passed up, Jonny/Constance had the great idea of simply putting a copper board strip through the foam, to serve as both mount and connector. It worked amazingly well, and we did the same for the camera. The rather awkward cobbler and its ribbon cable were frustrating, but there seemed no good alternative.

After we had the whole system connected, it was time to test! Anddd it doesn’t work. After changing some pin configuration we managed to get the screen working correctly, but attaching the header to the Pi stops LoRa from working. This is very frustrating.

Spent a lot of time in the evening writing up a report.

Tuesday 16/06 : In bed last night, I had an idea! If I was sending the data to the app, why not also have a go at the visual representation? This was sparked by my interest in app writing, and it was something new and interesting that I could do in a short space of time – a morning, in fact. I tried two different Android graph libraries, but settled on MPAndroidChart: it seemed to have good documentation and was easy to integrate. So I created a new tab in the app, gave it a graph layout, and adapted the other fragments’ code to make my GraphFragment. Not quite understanding the app fully (and wanting to try out file access with Android), I transferred data via a file: the MessageHandler wrote incoming telemetry to a file, and my graph fragment read this data, parsed it, and displayed it. Not bad for a morning’s work.

In college, more frustration with the payload construction. It is rather slower than anticipated – soldering takes time! Bits and pieces are coming together, but we still don’t have all the parts, primarily due to stores. Alex bought some coax cable for the antennae which has yet to arrive, and Jonny’s OLED screen is also missing. Without these, we cannot accurately judge what holes to make or how we are going to attach external devices. We did make a slot for the camera cable to pass through, and this works quite well. Hopefully, being positioned under the payload, the camera shouldn’t be too blinded.

Monday 15/06 : Went back to the Android app that I created weeks ago to input GPS data. First, I redid the Node-RED end, re-creating the database and adding a suitable flow to receive and process the data, and also for it to be queried. Unfortunately, Luke says he has deleted the code that supports the chase car GPS; hopefully this will be re-written. I then modified the app – although much of the structure was in place already – to receive and display telemetry data live, and added a flow for PITS telemetry to be pushed to the app. There were a few hiccups regarding formatting, but nothing too bad – JSON nodes are required. The idea behind this was for the user in the chase car to have an easy visual way to check the payload’s GPS (only time and coordinates were forwarded). The display was simply a ListView of the incoming sentences. After this I headed into college to continue construction of the payload (as this hadn’t been started properly yet…). This involved quite a bit of swapping stuff around, trying to think of the best way to fit it all into the box, and so on. Also ordered a new LoRa board to replace the broken one, which will hopefully arrive soon!

Sunday 14/06 : Working on a couple of different areas today.

As we have unrestricted access to the Hursley lab on the weekends, I took the chance to experiment with the rotating floor! The floor has three controls – clockwise, anticlockwise, and stop – with no fine control, e.g. a number of degrees to move. It has a 90 degree arc of movement, from 270° to 360°. I timed the rotation, and it moves through the whole arc in 20 seconds, so 4.5 degrees a second. Using this estimate, I set up an initialisation flow, and a flow to move the table with live data (which hasn’t been tested with real data, only with injections). The method is to time the gap between when messages are sent to the movement and stop nodes. For instance, to initialise, a clockwise message is sent, followed by a delay of 20 seconds to allow the floor to reach 360°. Then an anticlockwise message is sent, with a wait of 10 seconds, before a stop message is sent. The plan is to have this 315° position as our ‘zero-point’, equal to the direction the balloon moves initially, perhaps the wind direction at launch time. We can then concern ourselves with just the change of direction, and can represent a change of 45 degrees either way. The second flow stores the previous direction reading and compares it to each new direction value. If the difference is small, it is ignored. If it is large, a clockwise/anticlockwise message is sent according to the change in direction, with a stop message sent a suitable number of seconds later, calculated as the change / 4.5, as mentioned earlier. The limit of one anticlockwise and one clockwise message a minute, although understandable, is frustrating, as it means the floor won’t feel particularly ‘live’. On the other hand, I don’t really expect (hope not!) that the balloon’s direction will change by a considerable amount. Perhaps a different form of telemetry could be used for greater movement, if the restriction were lessened?
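The timing logic above can be sketched as a few lines of Python. This is only a sketch of the second flow’s calculation (the real logic lives in Node-RED); the function name, the 5° dead-band, and the command strings are all illustrative.

```python
RATE = 90 / 20          # measured: the 90 degree arc takes 20 s, so 4.5 deg/s
THRESHOLD = 5           # ignore direction changes smaller than this (illustrative)

def rotation_plan(previous, current, threshold=THRESHOLD):
    """Return (command, seconds before 'stop'), or None if the change is too small."""
    delta = current - previous
    if abs(delta) < threshold:
        return None
    command = "clockwise" if delta > 0 else "anticlockwise"
    return command, abs(delta) / RATE   # change / 4.5, as described above
```

The initialisation sequence is the same idea with fixed durations: 20 seconds clockwise to be sure of reaching 360°, then 10 seconds anticlockwise to sit at the 315° ‘zero-point’.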

I implemented an idea I have had for a while with regard to the app, and developed it further. Initially, I simply hard-coded our details into the app so it could connect without needing a long security token. I remembered how much I enjoyed app writing, so I went a bit further. I completely remodelled the phone GPS process, recreating the flow and database; the previous process was based on the UK space of Bluemix, which we left ages ago! The data is now stored in the SQL database with a date and timestamp. In addition, I adapted the subscription of the app for our own use. To establish functionality, I simply fed the incoming data from the phone back to it, via some formatting in Node-RED. This tested both the command version of the IoT node and the subscription of the app. Additionally, I changed the way received messages are stored in the app, as they were simply being appended to an ArrayList. With one message a second, this soon meant the newest data was a long scroll down, with the view reset every second – not ideal. I now insert new messages at the start and limit the size of the array, which solved the problem very succinctly. This process should be very easily transferred to live telemetry data, simply by switching the values in the IoT input node. This means we have another way of displaying data from the balloon, which is perhaps useful in a chase car if we end up having stationary receivers.
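The buffer change can be sketched like this. The real code is Java in the Android app (inserting into an ArrayList); this is just a Python analogue of the logic, and the class name and size cap are illustrative.

```python
from collections import deque

class MessageBuffer:
    """Bounded, newest-first message list, as described above."""

    def __init__(self, max_size=50):
        # maxlen means the oldest entries silently drop off the end
        self.messages = deque(maxlen=max_size)

    def add(self, message):
        # insert at the front, so the newest message is always shown first
        self.messages.appendleft(message)

    def latest(self):
        return list(self.messages)
```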

Friday 12/06: The larger screen which Jonny had ordered (20×4 rather than 16×2) was unboxed and worked straight away, a pleasure. I checked it works fine for displaying tweets, which will either be sent to the payload or read from file. Also began to consider construction of the payload. After we showed Vic, a technician in our department, our plans, he suggested slightly recessing the sensors to avoid damage. We made the small recesses, and Constance and I began soldering sensors and internal connections. Vic kindly did the incredibly delicate soldering on the sensors themselves – really, really impressive work. We are having some concerns about how to mount the camera and screen, but my solution seemed the most feasible in the time we have: hang both the camera and screen below the payload, with the camera angled slightly down and the screen at the top of its field of view.

Having considered payload construction with the camera attached, I took the opportunity to properly inspect how it is being used by the PITS code, and how we can adapt this for us to take videos without interrupting the camera’s use elsewhere. This should be very easy to do.

Thursday 11/06 : Greater understanding and streamlining of the PITS code, and editing of existing scripts and flows to accommodate new changes. Spent time following up and placing the order for our helium. Jonny and I managed to conflict, and I deleted one of his flows – only one person should be using Node-RED at a time, regardless of who is deploying! More time spent in vain trying to get broken modules to work.

Wednesday 10/06 : Both bad news and good news. We discovered that, whilst trying to get the screen to work with the LoRa boards, we managed to break the LoRa board hardware – perhaps due to the ‘USB Power Surge’ the PC providing the power detected. The Operation Mode registers return 0xFF when read, which is impossible if the chip is operating anywhere close to normally. This is really problematic: HabSupplies, the only supplier, is closed until the 15th June, so we must request another board and hope it arrives in time!

Good news on two accounts. I managed to configure the screen correctly, by using one of the GPIO pins that the PITS should be using, but only for a minor purpose – an LED light. Adapting and relocating a quick Python script, we can now receive and display tweets as planned! The screen does still need securely attaching/mounting to the header, rather than hanging off jump leads. The second piece of good news is SQL-related. Having got feedback from Jonny/Luke’s email to a Bluemix developer, I now understand how to use the SQL nodes with queries correctly, so we should be able to let the front-end people reliably request data since a given timestamp.

Jonny and I had another attempt at connecting to the IBM Innovation Centre lab, after failing a few times previously using MQTT/MQLite. Our solution: Twitter! Node-RED is so fast that it only took us a few minutes to implement our idea, which worked correctly: our Bluemix app tweets the data, and the Node-RED app in the lab detects this and uses it to control the lighting. Of course not optimal or ideal, but it was nice to see something working without hitting huge hurdles. After going home, we got an email from Dominic telling us he had set up a local MQTT server, and giving us the URL/port to connect to from our app. This worked wonderfully, and is perfect for our needs. The Twitter experiment proved an interesting demonstration of Bluemix’s capabilities, but is no longer needed.

Tuesday 9/06 : Day in college, mainly spent trying to make the screen work, by investigating pin configuration and uses. Seems rather tricky, as the LoRa board has no schematic that we can find, and the header/modules cover many of the connections so we cannot be exactly sure what uses what.

We are having problems at the ‘middleware’ stage. The SQL nodes in Bluemix don’t seem to function correctly: once a query is made, it cannot be changed until the app is redeployed – you can send the node a different query and it will still return the result for the original one. We also couldn’t work out how to use the ‘?’ within a query to allow parameters to be passed to the database.

Monday 8/06 : Spent a lot of time sorting out Norway, but also managed to work out where to source helium for our balloon by ringing a few people. Adi had been trying to contact departments and a student society via email, but was having a hard time getting a response. The company BOC has a depot on campus and can supply us via our department, but I was rather shocked at the quote – £150! Constance says the screen doesn’t work in conjunction with the PITS and LoRa boards, so that will need adapting or changing. Researched the possibility of using the DSI (display) adapter on the Pi, but was rather disappointed and puzzled to find that drivers STILL haven’t been released for it, so there are no commercially available screens. This adapter has been present since the first model of the Pi, so I was rather perplexed that it hasn’t been made usable yet.

Sunday 6/06 : Spent a while bringing projects together. Incorporated my LoRa code into the PITS software, replacing the supplied LoRa thread with my own. This took quite a bit of time, due to the large number of conflicts and dependencies involved, made both better and worse by my attempts to isolate the LoRa code so it stands alone. Spent a veryyy long time debugging what I thought was an error in setting the variables for the LoRa configuration, ending up looking at memory locations of variables etc. to solve a seemingly random error. It turns out LogMessage, which I had adapted to print either to the window or to the console, was causing the error. Never having used variable arguments in C before, I had made a mistake: my verbose debugging would print text correctly, but show erroneous values for any variables I passed it. Rather tricky for my naive approach to debugging to solve.

Also implemented an idea Jon gave us on Saturday, with an alteration from Jonny: the skypi can request a tweet to display rather than being sent them, which makes scheduling a lot easier. If the link is broken, i.e. the request gets no reply, the skypi reads a pre-saved tweet from a file and displays that instead. This worked correctly first time, which was a bit of a guilty pleasure.

Saturday 5/06 : Began to concern myself with the handling of the data: LoRa module 1 creates and sends data, LoRa module 2 receives it and forwards it to Bluemix. Knowing more about the sentence structure from yesterday, I could create specific SQL tables to store our data – Cloudant required learning a lot of new content, and the tutorials were not particularly Node-RED friendly. Had a rather frustrating, seemingly unfixable error: Jonny had created a table with FLOATs, but the type was displayed to me as DOUBLE. Upon copying his database specification, I couldn’t get my data to be saved; errors were thrown instead. After an hour or so of trying various things, Jonny spotted this and helped me out. Also considered more aspects of how the front-end people could access the data, considering the issues Luke told us he was having with cross-domain requests.

Thursday 4/06 : The balloon arrived! Unboxed it with Jonny/Constance and made a short video. Constance managed to get the sensors working on our Pis (as opposed to our older model test Pi), which is great – another link in the chain completed. I reworked how LoRa creates a sentence to incorporate the sensor readings, and managed to get them sending to Bluemix with another Python script. After trying and failing to get the C client to work, I unfortunately had to resort to Python. Not what I would ideally like, but very fast to get results. Seeing as the LoRa board already logged received telemetry data to file, it didn’t take long to write a script to read this and publish it to the IoT service. Another Python script running in the background…
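The forwarding script is essentially a ‘tail -f’ on the gateway’s log file. A minimal sketch of that part is below; the log path and what happens to each line (the MQTT publish) are specific to our setup, so they are only shown in a comment.

```python
import time

def follow(log, poll=1.0):
    """Yield lines as they are appended to an open log file, like 'tail -f'."""
    while True:
        line = log.readline()
        if line:
            yield line.strip()      # hand each telemetry sentence to the caller
        else:
            time.sleep(poll)        # nothing new yet; wait and check again

# In the real script, each yielded sentence is then published to the IoT
# service, roughly:
#   for sentence in follow(open("telemetry.log")):
#       client.publish(topic, sentence)
```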

Wed 3/06 : Yet more work on LoRa, a seemingly never-ending task, focussing on neatening and streamlining all the code. I believe it runs much more elegantly now, and I have reworked the command line ack algorithm to work a lot better – it now receives continuously whilst waiting for an ack, rather than dropping any intervening packets and missing the ack. Still more to be done – it’s hard choosing how much to rewrite completely whilst still staying compatible with the PITS software and keeping it usable for others in the future.
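The reworked loop looks roughly like this. The real implementation is C against the LoRa chip; ‘radio.send’/‘radio.receive’, the retry count, and the timeout here are hypothetical stand-ins just to show the shape of the algorithm.

```python
import time

def send_with_ack(radio, packet, retries=3, timeout=2.0, on_packet=None):
    """Send a packet and wait for an ack, without dropping other traffic."""
    for _ in range(retries):
        radio.send(packet)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            remaining = deadline - time.monotonic()
            received = radio.receive(max_wait=max(remaining, 0.0))
            if received is None:
                continue                # nothing arrived within max_wait
            if received == b"ACK":
                return True             # our acknowledgement
            if on_packet:
                on_packet(received)     # intervening packet: pass it on, don't drop it
        # ack never arrived in time; loop round and retransmit
    return False
```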

I have tried to incorporate MQTT with a C client into our LoRa code, but cannot seem to get it to work – some restriction on the IBM IoT Foundation, perhaps? The C library all appears to work fine, and a connection is made, so authorisation is not the issue, but for some reason the connection is dropped when I attempt to publish.

Spent some time trying to access the controls for the Innovation Centre in Hursley, but without success – frustration with a hint of pleasure, using packet sniffers etc. Found out the next day that the server at their end had updated and not restarted!

Tuesday 2/06 : No internet = no work!

Monday 1/06 : Could finally connect Pis to the network in college. Spent some time understanding the sensor code that Alex/Constance have written, and recording another podcast in the awesome IC radio room.

Sat/Sun 30-31/05 : Weekend wedding! Have made some progress adapting the code: managed to send and receive data again, and created my own (very poor atm) command line messaging service, which makes a number of attempts to send and waits for an ack message. Have discovered a small bug, in that somehow the setup configuration is wrong in my files. It receives as expected, but only if ./gateway is run first. If I copy and paste the setup code from gateway.c, my executable exits early, seemingly at a point where registers are being written, with no easily visible cause. Puzzling.

Friday 29/05 : Began to adapt the code in the GitHub repo to our own needs, replicating/changing functions and trying to make it more presentable and usable for others. Messy work at the minute. This is compounded by the fact that the given code is not very consistent – some functions with the same name have different content in the gateway files (groundpi) from the PITS files (skypi).

Thursday 28/05 : Busy Wed/Thurs helping move house, but in the evening, much to my annoyance/relief, I found code on a different branch of the PITS Git repo for sending via LoRa. I feel this wasn’t particularly well advertised, as neither Jonny nor I had noticed it. Whilst this was good to find, it did mean my previous week’s work was essentially wasted: although I learnt a huge amount about the implementation of the chip, and did manage to replicate much of the necessary code myself, the result already existed!

Mon/Tue 25-26/05 : More work with the LoRa code, including meeting up in college with Jonny to explain it.

Sunday 24/05 : Having got the hardware, managed to make huge progress with LoRa. Have written code to send data, adapted lora_gateway to receive it, and neatened up the source code and headers into separate files to make it more of a usable API. Also enabled command line instructions to be sent and executed, with an acknowledgement system.

Friday 22/05 : Further work on the API. Much more understanding of the whole process, and I have written functions, but they are hard to test without hardware. Jonny got SSDV working from the PITS to upload photos, which is promising, and he took over developing the script I started to publish telemetry data.

Thursday 21/05 : Small bit of work on WordPress, creating member pages. Began to look into and develop code for the LoRa modules on the Pi – essentially a lot of reading of both the module datasheet and the code in pits/lora-gateway. Potential API?

Wednesday 20/05 : Worked with Luke on the correct method of data sharing between our databases and the website. The quick HTTP request/response I made wasn’t sufficient, as it didn’t work on the website (cross-domain request error), so we had to encapsulate the data. Enjoyably, the solution needed both my knowledge of Node-RED and Luke’s knowledge of JSON/JavaScript. With Jonny, as well as enjoying our huge aerials which have arrived, we worked more on the communication. After having issues with SDR# and fldigi on my laptop for a while (INSTALL, don’t run from a USB), we finally managed to get decoding on a mobile station (laptop), with the FUNcube dongle and the big aerials. Using the script I wrote the day before (with minor changes), we managed to get data, for the first time, all the way from the sensors to the database in Bluemix, in real time. Achievement 🙂

Tuesday 19/05 : Working at home, edited and uploaded the podcast according to the discussion with Adi the day before. Attempted to solve the problem of decoding as follows.

We had software set up correctly that could receive and decode the signal from the PITS. However, as most HABers aren’t interested in capturing the data and using it live, the ‘dl’ version of the fldigi software uploads the data directly to habitat.habhub.org, and doesn’t have a neat or effective logging system. My attempted solutions, which I spent all day (and in fact previous days) working on:

1) Edit the source code of dl-fldigi, available on github, so that when data is sent to habhub it is duplicated and sent to bluemix.
2) Within dl-fldigi, change the URL that the data is sent to, and set up connections in Bluemix to receive data, save it, and forward it to habhub.
3) Capture the outgoing TCP packets, and send the data to Bluemix with a script.
4) Something else!

And issues:

1) To my mind by far the most elegant and most effective solution. However, after trawling through the source code I had no lead on where to start. With more time, this is definitely the solution I would attempt; with only 2 weeks to go, however, it is not the best plan.
2) Investigated the available options on Bluemix, but without success. After failing myself, I searched through forums, and it seems Bluemix does not allow external non-HTTP TCP access. I also do not know whether fldigi is dependent on a correct response from habhub, and I do know that it requires certain code and the database structure that is running on habhub. I believe it is doable, but it is certainly not a good approach. Perhaps Bluemix needs to support external TCP connections? Of course there is a security risk etc., but for integration with external services I imagine this could be very useful.
3) Same issues as above with connecting to Bluemix.
4) My solution! I downloaded and had a look at the original fldigi software, rather than the distributed ‘dl’ version which has been written and designed for HABers. This has a much nicer logging system, with all the decoded text regularly written to a structured file. I do not know why this was taken out when it was adapted for ‘dl’. Using this software, I was able to write a Python script that reads this file, extracts the decoded payload, and then publishes it to an MQTT topic. Not the prettiest solution, but (hopefully) sufficient for our project in our very small time frame. The script still needs work, as it is very basic and has no error/checksum checking.
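For the missing checksum step: standard HAB telemetry sentences of the form ‘$$CALLSIGN,field1,field2*XXXX’ carry a CRC16-CCITT checksum (polynomial 0x1021, initial value 0xFFFF) computed over the characters between the ‘$$’ and the ‘*’. Assuming that sentence format, the check the script lacks could look like this:

```python
def crc16_ccitt(data, poly=0x1021, init=0xFFFF):
    """CRC16-CCITT, MSB-first, as used by HAB telemetry sentences."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def checksum_ok(sentence):
    """Validate a '$$payload,fields*XXXX' telemetry sentence."""
    try:
        body, received = sentence.strip().lstrip("$").rsplit("*", 1)
        return crc16_ccitt(body.encode()) == int(received, 16)
    except ValueError:          # no '*', or non-hex checksum: reject
        return False
```

Anything failing the check would simply be dropped rather than published to the MQTT topic.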

Monday 18/05 : Spent an hour in the IC Radio studio (very swish) recording for our podcast – both fun and hopefully productive! After this, a few more hours talking to the group, still feeling frustrated with college and the Pis: the tool we found to give us a Pi’s IP address from its MAC address appears to only work in a different department, and any software that needs installing on a college PC (for decoding radio signals) can’t be used.

Sunday 17/05 : Spent a long time creating this page, and writing all the text below! Spent some time researching how to record the data that dl-fldigi decodes and sends to the Habhub servers; it looks like we may need to edit source code, or write a program which intercepts the data.

Saturday 16/05 : Another fun and productive day. The day started with working on more pages for the blog: Tutorials. This involved writing up what I had done when installing the OS on the cards, installing the PITS software, and the LoRa gateway. I also set up a timeline.js timeline, hopefully to be embedded on our Bluemix website, as there are issues with WordPress and embedding. Then, having obtained the aerials and camera from college on Friday evening, I attempted to transmit and detect information from our PITS board. After setting up the hardware, and creating a connection with the Pi so I could turn tracking on and off, I worked on the detection. I downloaded and installed SDR#, a program to process the signal from the USB dongle, and dl-fldigi, which decodes the audio signal from SDR# into text. The driver for the USB dongle needed changing with Zadig, a utility of SDR#. After configuration, it was a matter of homing in on the signal with SDR#, which wasn’t too difficult. However, dl-fldigi is a rather bewildering program when first booted, and I couldn’t find a particularly useful tutorial anywhere. Stuck, I changed tack and worked elsewhere.

IBM Starter is a ready-made app to connect Android devices to IBM IoT, essentially a nice front end for an MQTT client. I downloaded the project from GitHub, built it, and installed it on my phone. Next, I registered the device on the IoT dashboard in Bluemix, and set up 2 new flows with Node-RED to handle the data. ‘Phone GPS’ has an IoT input node, collecting the data from the phone via an MQTT broker – essentially subscribed to the topic the phone is publishing to. A function node processes this data to extract just the GPS and drop the accelerometer data. This is then saved in a Cloudant database, different to, but supplied by the same service as, that for the telemetry data. ‘ReadPhoneGPSDatabase’ is similar to the telemetry flow, responding to an HTTP request with the GPS data. After this pleasing success (the IBM software was very good in this respect – worked first time, no problem), I moved back to the receiving I had been stuck on earlier.

Another hour or two of annoyance at not being able to get dl-fldigi to work followed. I found that the configuration stated the bits per character and stop bits slightly differently to what I expected. I then began to delve into the PITS software to see what format it was transmitting – not the best way. Finally, at around midnight, I struck upon the solution! The correct configuration was specified on the Pi-In-The-Sky website, and dl-fldigi worked beautifully. We were receiving data from our Pis! The solution reminded me of my A-Level maths teacher. His advice was RTFQ, ‘Read The F******G Question!’. In my message to Jonny, I said I had to learn to RTFW, ‘Read The F*****G Website!’, as really the PITS website should have been the first place to look!

Friday 15/05 : Rather inefficient day in project terms; spent a lot of time organising Norway. Processed the video from Thursday afternoon.

Thursday 14/05 : I fail to set up my own network with a router in college, due to a variety of issues (some my own fault!): a laptop which I couldn’t connect to the college WiFi, a Pi which had broken Ethernet support unknown to me, and my own slowness. Jonny arrives, and we work for a while, considering different methods and approaches we could use. We find the very useful tool that will identify the IP address of any Pi attached to the college network, meaning we don’t need to worry about WiFi dongles or have issues SSHing in.

On Thursday afternoon, things start to get fun and I make headway. I take lots of the hardware home, and work in my much preferred environment. Unfortunately, as mentioned, one of the SD cards wouldn’t allow Ethernet connections, and I couldn’t find a micro SD card adapter at my girlfriend’s house, having left the group’s in college. Nevertheless, I worked on Bluemix, the Pi to be stationed on the ground, and my own Pi for testing and experimentation. Lots of the work I did is that contained in the tutorials – installing software! I didn’t start work on what I think will be the hardest bit of the project, the long range communication, both due to its complexity and the lack of aerials (coming soon!). I did work on my own MQTT publisher, so we could send custom messages. This was surprisingly easy to do, by downloading Eclipse Paho and writing a Python script to connect to the IBM IoT broker. I added another flow via Node-RED, allowing the database to be queried via an HTTP request. At the moment this returns all the data, but by talking to Luke/Adi we can decide on what we want the request to return, as this will come from the website they are building. In the midst of this, I checked over the blog that Jonny had written about our time in Hursley and published it.
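The fiddly part of the publisher was not the Paho code but getting the IoT service’s naming conventions right. A sketch of those conventions as I understand them (the org and device values here are placeholders, not our real credentials):

```python
import json

def client_id(org, device_type, device_id):
    # Devices connect with a client id of the form d:<org>:<type>:<id>
    return "d:%s:%s:%s" % (org, device_type, device_id)

def event_topic(event="telemetry", fmt="json"):
    # Device events are published to iot-2/evt/<event>/fmt/<format>
    return "iot-2/evt/%s/fmt/%s" % (event, fmt)

def event_payload(data):
    # The IoT service expects the reading wrapped in a "d" object
    return json.dumps({"d": data})
```

With Paho, the rest amounts to connecting to the organisation’s broker host with the username ‘use-token-auth’ and the device’s token, then publishing event_payload(...) to event_topic().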

Wednesday 13/05 : Some intermittent work from me, as I was preparing for a Spanish exam on Thursday. Needless to say, Node-RED is far more fun than revision, and I was still playing around, making a flow or two. On my own Pi I set up IBM’s IoT service, so it sent some data from default sensors to an IBM MQTT broker in Quickstart mode. I then connected this broker to Node-RED with an IoT node, and with a Limit, Template, and Tweet node managed to produce a tweet once a minute to show that the Pi is connected. However, it would appear Twitter refuses duplicate tweets within a certain time (I’ve never used Twitter before) – needs more work! I also set up a very simple Cloudant database where the data from the Pi is stored, using just an IoT and a Cloudant node, and setting up the database in the Cloudant service. Having only had experience with SQL databases before, I found this remarkably easy to set up! However, I’m not sure yet how effective the search method is. Again, more work is needed.

Tuesday 12/05 : Our Hack Day in Hursley. Extremely interesting and insightful – I came away with many ideas, and the knowledge of how to approach a solution. Very worthwhile. After arriving back home, I began to play around with Bluemix, specifically the wonderful Node-RED flow editor. As is often the case when you first use a service, this mainly consisted of blundering around, creating and deleting apps, and seeing what was what. I published the pages for the WordPress site I had made previously – Project Overview, Members, Hardware, Contact Us.

Friday 8/05 – Monday 11/05 : Surprise holiday for me!

Thursday 7/05 : Enthused by Steve’s talk the day before, I was keen to start work. However, this led to an hour or two of frustration after realising my laptop had died, and none of the college PCs in my department have SD card slots! Constance finally managed to install an image on the cards, but by then it was time to leave – I had a busy next few days.