Towards the Next Generation Road Survey

Over the past few weeks, I’ve managed to escape the office and get back to the field. With an impending change, it’s been a very refreshing time to get back into the mix – especially out onto the roads of Zanzibar.

Alongside scaling out Ramani Huria and working with (awesome!) colleagues on the signing of a Memorandum of Understanding between Ardhi University, the World Bank, and DfID to support the development of curriculum (with ITC Twente) and the sustainability of community mapping in Tanzania for the next five years, I’ve been working on a side project looking at how machine learning can be used to assess road quality.

To do this, the N/LAB team at the University of Nottingham, Spatial Info (the spin-out of my team that helped build Ramani Huria and the Tanzania Open Data Initiative), and I are working with the Zanzibari Department of Roads, under the Ministry of Infrastructure, Communications and Transport, to survey all roads on Unguja Island, Zanzibar.

The Department of Roads & Uni Nottingham Team

So far, we’ve worked on getting a surveying vehicle back on the road, the initial back and forth with government stakeholders, and pulling together the various road data sources (such as those from the Government and OpenStreetMap) to work out where to drive and the sequencing of the survey. All of this will support a data collection protocol that merges traditional surveying techniques with novel ones such as RoadLab Pro.
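
As a flavour of the OpenStreetMap side of that merge, pulling a drivable road network for the island out of a country extract is a couple of commands with the osmium tool – a rough sketch, where the bounding box and filenames are purely illustrative:

# clip a Tanzania extract to (roughly) Unguja island, then keep only highway=* ways
osmium extract --bbox 39.1,-6.5,39.6,-5.7 tanzania-latest.osm.pbf -o unguja.osm.pbf
osmium tags-filter unguja.osm.pbf w/highway -o unguja_roads.osm.pbf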

All of these data streams will then be used as a training dataset to see how machine learning can inform on road quality. But first, we’re getting the traditional survey underway. It’s going to be a long road ahead – as long as all the roads in Zanzibar!

Watch this space, the project’s Medium page, and the N/LAB’s blog on using machine learning for automated feature detection from imagery. Get in contact below in the comments as well.

Written in the Al-Minaar Hotel, Stone Town, Zanzibar (-6.16349,39.18690)

OSM: Going Back in Time

I’ve been playing around with the full planet file to look at going back in time in OSM. Mainly, this is to look at how Ramani Huria’s data has evolved over time, and it’s all part of extracting more value from Ramani Huria’s data. The process wasn’t as straightforward as I had hoped, but I eventually got there – this isn’t to say it’s the only or best way, just the one that worked for me!

To do this, you’ll need a pretty hefty machine – I’ve used a Lenovo x230 with an Intel i5 quad-core at 2.6GHz, 16GB of RAM and over 500GB of free space, to deal with the large size of the files you’ll be downloading. This is all running on Ubuntu 16.04.

Firstly, download the OSM Full History file. I used the uGet download manager to handle the 10-hour download of a 60GB+ file over a 10Mbps UK broadband connection. Leaving it overnight, I had the full file downloaded and ready for use. Now to set up the machine environment.

The stack is a combination of OSMIUM and OSMconvert. On paper, OSMIUM should be the only tool needed; however, for reasons I’ll come to, it didn’t work on its own, so I found a workaround.

OSMconvert is easily installed:

sudo apt-get install osmctools

This installs OSMconvert and other useful OSM manipulation tools. Installing OSMIUM is slightly more complicated and needs to be done by compiling from source.

Firstly, install LibOSMIUM – I found that not installing the header files meant the compilation of OSMIUM proper would fail. Then use the OSMIUM docs to install OSMIUM. While there is an OSMIUM package included in Ubuntu, it’s an older version which doesn’t allow splitting data by timestamp. Now things should be set up and ready for pulling data out.
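
For reference, the build steps boil down to roughly the following – a sketch, assuming the build dependencies listed in the osmium-tool docs (CMake, a C++11 compiler, Boost, zlib and friends) are already installed:

# libosmium is header-only; osmium-tool's build looks for it in a sibling directory
git clone https://github.com/osmcode/libosmium.git
git clone https://github.com/osmcode/osmium-tool.git
cd osmium-tool
mkdir build && cd build
cmake ..
make
sudo make install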

Dar es Salaam being the city of interest, it has the bounding box (38.9813,-7.2,39.65,-6.45) – you’d replace these with the south-west and north-east corner coordinates of your place of interest – and use OSMconvert, in the form:

$ osmconvert history_filename -b=bounding_box -o=output_filename

osmconvert history-170206.osm.pbf -b=38.9813,-7.2,39.65,-6.45 -o=clipped_dar_history-170206.pbf

This clips the full history file to that bounding box. It will take a bit of time. Now we can use OSMIUM to pull out the data from a date of our choice in the form:

$ osmium time-filter clipped_history_filename timestamp -o output_filename

osmium time-filter clipped_dar_history-170206.pbf 2011-09-06T00:00:00Z -o clipped_dar_history-170206-06092011.pbf 

This gives a nicely formatted .pbf file that can be used in QGIS (drag and drop), PostGIS or anything else. As the contrast below shows!

Tandale, Dar es Salaam, Tanzania – 1st August 2011
Tandale, Dar es Salaam, Tanzania – 13th February 2017
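
If you want more than two snapshots – say one per year – the time-filter step wraps neatly into a small loop. A rough sketch, reusing the clipped history file from above:

# pull a snapshot of the clipped history for the 1st of January of each year
for year in 2011 2012 2013 2014 2015 2016 2017; do
  osmium time-filter clipped_dar_history-170206.pbf ${year}-01-01T00:00:00Z \
    -o clipped_dar_history-${year}0101.pbf
done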

Enjoy travelling back in time!

All map data © OpenStreetMap contributors.

Building Heights in Dar es Salaam

When I first went to Dar es Salaam in 2011, there were a few skyscrapers adorning the city’s skyline; now they’re everywhere! Sitting at a rooftop bar in the centre of the city, it’s a mass of cranes and pristine new buildings.

Alongside this rapid growth, Ramani Huria has been collecting a lot of data, but much of it doesn’t get rendered by the default OSM styles… so I’ve dug into the data and created a map of the different floors across the city.

This interactive map allows you to explore where the tallest buildings are in the city, but displaying the data in this way also makes the densest, unplanned and informal areas of the city very clear.

There is still some way to go though – in Dar es Salaam there are around 750,000 buildings, with roughly 220,000 (~30%) having been surveyed by the Ramani Huria team and given an appropriate attribute. Ramani Huria has focused its efforts on the urban centres of Dar es Salaam, where most of the multi-storey buildings are to be found. But there’s still a lot more to be covered towards Bagamoyo and Morogoro.
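
For anyone who wants to poke at the underlying data themselves, the buildings with a recorded number of floors can be pulled out of an OSM extract fairly easily – a rough sketch with hypothetical filenames, assuming the osmium tool and GDAL’s ogr2ogr are installed (note that building:levels isn’t a dedicated column in GDAL’s OSM driver by default, so it lands in the other_tags field):

# keep only ways/relations carrying a building:levels tag, then convert to GeoJSON
osmium tags-filter dar_es_salaam.osm.pbf wr/building:levels -o dar_levels.osm.pbf
ogr2ogr -f GeoJSON dar_levels.geojson dar_levels.osm.pbf multipolygons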

Hat tip to Harry Wood, whose advice and guidance pointed me in the right direction – a more technical blog post, with details of other challenges around correctness of tagging, will follow, but that’s for another post – now to look at Floor Space Indices…!

GDAL Nodata and Drone Processing Pipeline

Due to a nodata tag issue within the GeoTIFF standard, Pix4D doesn’t generate the nodata value in a GDAL-compliant manner. As such, when importing GeoTIFFs into software using GDAL (like QGIS or Mapbox), the nodata value renders as black, causing issues where layers overlap. Additionally, for an unknown reason, Mapbox does not handle multiple GeoTIFFs in the same way it displays .mbtiles, pixellating uploaded GeoTIFFs. This short blog presents a processing pipeline to deal with both of these issues.

The solution for translating the nodata values has been documented here: https://geozoneblog.wordpress.com/2013/02/26/troubles-tiff-no-data-values-floating-point. I’ve written a simple bash script to do this in a UNIX environment: https://github.com/WorldBank-Transport/TanzaniaDrones/blob/master/transparency_batch.sh. Basically, run the script in the folder where the files are found and it will iterate through each file, translating the nodata tag value to be GDAL-compliant.
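
The script itself amounts to little more than a loop over gdal_translate, along these lines – a simplified sketch (the linked script is the reference; adjust the nodata value to whatever your outputs actually use, 0 in the black-background case described above):

#!/bin/bash
# rewrite the nodata tag on every GeoTIFF in the current folder so GDAL reads it correctly
for f in *.tif; do
  gdal_translate -a_nodata 0 "$f" "translated_${f}"
done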

Once you’ve got the nodata/transparency values resolved, it’s quite easy to add your GeoTIFFs to Mapbox’s TileMill. Export your resulting map as .mbtiles and upload this to Mapbox as a tileset. In the Mapbox Studio Classic interface, add this to a map and voilà – drone imagery online.

Starting Ramani Huria – Mapping The Flood Prone Areas In Dar es Salaam

Four years ago, in August 2011, I was fortunate to manage the community mapping of Tandale. It was an experience that irrevocably changed my professional direction and interests. Over a month I trained and worked alongside brilliant students and community members, who were all focused on getting an open map of Tandale, something that had never been accomplished previously. When it was done, the reception across civil society and government was positive and intentions of scaling the pilot to the city were mooted, but for one reason or another it never quite made it. Then in December, floods hit the city. In dense informal urban environments such as Tandale these floods are fatal and dramatically change the landscape, as well as causing mass damage to survivors’ livelihoods and assets. Mitigating these floods is hard – where do you start in the fastest growing city in Africa? The population as of the 2012 census stands at 5 million, with projections showing it could grow to 10 million by 2030.

This rapid and unplanned urbanisation is in part the cause of flooding: the infrastructure with which to cope with high rainfall, such as drains and culverts, was not built alongside residential dwellings. This is especially acute in the unplanned, informal urban settlements where the majority of Dar es Salaam’s residents reside. The theory here is quite simple: if you can identify where it floods, you can either install or upgrade infrastructure to ameliorate the situation for residents. Unpacking this, the crux of the issue falls to two main points: governance and data.

Ramani Huria – Swahili for “Open Mapping” – is an operationalisation of this theory of change. In March 2015, a coalition from across Tanzanian society – composed of the City Council of Dar es Salaam, the Tanzanian Commission for Science and Technology (COSTECH – under the Ministry of Science, Communication and Technology), the University of Dar es Salaam, Ardhi University, and the Buni Innovation Hub, supported by the Red Cross and the World Bank – supported the inception of Ramani Huria, with the goal of mapping flood prone areas in Dar es Salaam, making this data openly available, and supporting the use of this data within government, where decisions can be made to mitigate flooding.

Mapping Phases

It is a far cry from 2011, when just mapping the ward of Tandale was a large task. Ramani Huria consists of a pilot phase and four subsequent phases. To pilot, the wards of Ndugumbi, Tandale and Mchikichini, with a combined population of over 100,000 residents, were mapped in series. This process paired 15 students with community members, leading to maps of all features within each community. This information, focusing on drainage and waterways, is critically needed to help understand and locate flood prone areas; this is a high priority in Dar es Salaam due to the damage that annual floods wreak upon the city and its residents. In this piloting phase, conducted from March to the end of June, these three wards were mapped, in part to generate the data that will feed flood inundation models and exposure layers, but also to pilot the data model and gel the team prior to Phase One.

Scale Up Workshop – https://www.facebook.com/ramanihuria

Phase One on paper is quite simple: take 150 students from the University of Dar es Salaam’s Department of Geography and Ardhi University’s School of Urban and Regional Planning on industrial training, hold an inception workshop, deploy this contingent across six wards and work with community members to replicate the pilots, but running in parallel. At the time of writing, mapping is ongoing in six communities: Msasani, Keko, Makumbusho, Mabibo, Makurumla and Mburahati. According to the 2012 NBS census, these wards have a combined population of over 280,000 residents. Phase One was kicked off on the 6th of July and will run until the 14th of August.

Field Survey – https://www.facebook.com/ramanihuria

Phases Two and Three will integrate community volunteers from the Red Cross; these volunteers are committed to creating community-level resilience plans. These plans will use the data produced by the mapping to create resident evacuation routes and aid Ward Executive Officers with planning decisions, among many other uses. Additionally, with embedded long-term volunteers monitoring change in their wards, this will hopefully result in detailed, up-to-date maps in rapidly changing urban areas.

InaSAFE Training – https://www.facebook.com/ramanihuria

Phase Four unfortunately sees the students depart from the project, due to their graduation. With a remaining contingent of around 30 mappers, mapping will continue until February 2016. These phases cover the data component; consequently, alongside them are dedicated training events aimed at building the capacity to use and deploy this data in real-world situations. On the 20th of July the first such workshop took place, with representatives from the Prime Minister’s Office – Disaster Management Department being trained in spatial analysis in QGIS and risk modelling using the QGIS plugin InaSAFE. A series of these workshops will take place, placing the data into the hands of those responsible for the city.

While this is ongoing in Dar es Salaam, you can get involved wherever you are in the world through the Missing Maps project. Missing Maps is a collaboration between the Red Cross, Doctors Without Borders and the Humanitarian OpenStreetMap Team, aimed at digitising “the most vulnerable places in the developing world”, primarily by crowdsourcing the digitisation of aerial imagery. At the moment, there are three tasks for Dar es Salaam:

By helping digitise the buildings and roads using the recent drone and aerial imagery, the process of mapping is made faster, allowing the community mappers to focus on the detail of flood data. Additionally, the data from Ramani Huria is all placed into OpenStreetMap, its code is on GitHub, and content is available from Flickr and Facebook, all under an open licence. Please get involved!

 

Written on a plane somewhere between Tanzania and the United Kingdom

GISRUK 2013

From the 3rd to the 5th of April I attended GISRUK (GIS Research UK) to give a paper on Community Mapping as a Socio-Technical Work Domain. In keeping with Christoph Kinkeldey’s love of 1990s pop stars, Vanilla Ice made a second slide appearance, leavening what is a very technical academic title. In short, I’m using Cognitive Work Analysis (CWA) to create a structural framework for assessing quality (currently defined by ISO 19113 Geographic Information: Quality Principles – well worth a read…) where there is no comparative dataset.

CWA is used to assess the design space in which a system exists, not the system itself. By taking a holistic view and not enforcing constraints on the system, you can understand what components and physical objects you would need to achieve the values of the system, and vice versa. In future iterations I’m going to get past first base and look at decision trees and strategic trees to work out how to establish the quality of volunteered geographic data without a comparative dataset – building quality analysis in from day one, as opposed to it being an afterthought.

Written and submitted from Home (52.962339,-1.173566)

 

A Manifesto for the OSM Academic Working Group

A fellow member of the OSM Foundation replied to a conversation on the mailing list: “As a guerrilla academic…“. The context was a suggestion for increased academic cooperation within OSM. To this end I proposed a new working group for the OSMF: the Academic Working Group. This would have the aim of improving the academic quality and communication of those using OSM in their research, and facilitating collaboration.

Below is the start of the manifesto. It’s not complete, but it’s a start.

Background

Academic institutions use OSM data, be it as part of their published research or for testing hypotheses. Some of the publications are listed on the wiki: http://wiki.openstreetmap.org/wiki/Research. However, within OSM and the OSMF this research is undertaken on the researchers’ own initiative. Researchers are looking at OSM through recommendation (supervision) or self-interest within their own academic structures. Given the growth of OSM and the research into it, it seems likely that academic interest will widen and grow.

Goals

Support academic research in OSM, encouraging best practices and acting as a forum for researchers. This aims to support researchers starting out with OSM, but also to unify a community of existing researchers; collaborations and knowledge sharing will hopefully follow. It would also identify areas of research for the community as a whole, with usability and business models as potential starting themes.

Tasks

  • Uniting existing researchers, whether at institutions or following independent academic study.
  • Providing documentation (à la learnOSM) but focused on researchers.
  • Providing a forum for researchers to discuss their research and bridge into the community.
  • Supporting, and providing problems to, the academic corpus.
  • Communicating potential collaborations, needs and wants.
  • More TBD.

Working Group vs. Community

I think this is hitting a gap that currently exists in the community, and I don’t see potential areas for conflict. That being said, do we have enough members within the OSM(F?) to create and steer the working group?

AWG vs. other WGs

There is a small amount of overlap in interest between this proposed AWG and other Working Groups. I can see potential overlap with the Communications and Strategic Working Groups: Communications, as this would aim to focus on building up the OSM academic community; Strategic, as they may wish to commission studies, or at least support them, into critical areas of OSM.

Next steps 

Again, I’ll throw this to the OSMF. Where should we go from here?

Written and submitted from the London St. Pancras to Nottingham Train.

Community -> Hackathons -> Software Shipped -> Repeat?

Taarifa’s inception as a hackathon project makes its continued existence special. From speaking to peers, it seems that many hackathon projects exist to create ideas, not code. Fortunately we’ve stuck together and widened the community, and we now have members of the Taarifa community working on it for their dissertations, and myself, through my job at the World Bank, deploying an instance of it in Uganda.

Within our own community we’ve got developers across the world working on little bug fixes and functionality. One member has taken improving the mobile UI upon himself, and the mobile web app now looks brilliant on a phone screen, decluttered from unnecessary components, with things like inline labels ensuring things are ‘better’ on a phone. This builds upon a hackathon held at Shoreditch Design’s office, where we shipped Taarifa 1.0.

With the forthcoming Uganda deployment (and potentially other applications) we have the opportunity to test and refine certain components. However, building capacity in the places where this is deployed is important. We hope that by demonstrating Taarifa in developing innovation centres we can seed the ideas and principles of Taarifa (and by extension open source software), with those centres joining our growing community. This in turn would provide different perspectives on the requirements and ultimately filter down into software releases, improving it.

The community is currently in the process of writing a Taarifa API consisting of entirely new code. We’ve just deployed a Q&A site at help.taarifa.org, making it easier for users of Taarifa to ask questions and building a knowledge base for future users, avoiding the ‘go to GitHub’ mentality which – in my opinion – should be for developers. Currently Nico and I are working on screencasts to give short introductions to installing and using the Taarifa system. Onwards!

Written and submitted from Leicester, UK (52.6352614,-1.1378884)

Usability Of OSM’s ‘Toolkit’ In Community Mapping

This blog post isn’t a formal evaluation of the usability of OSM’s software or the equipment used for mapping. It is not meant to attack particular software; the software and implementation of OSM deserve many medals and an equal amount of recognition.

This post is about things I noticed while mapping in Tandale. There is no statistical analysis, I have no dependent or independent variables, and it’s based mainly around anecdotes and conversations with people. Though this isn’t a formal ethnography, it could serve as some useful pointers for the future.

JOSM

As we had netbooks with a small-ish (11″) screen size and a trackpad, mice were essential for mappers getting started. In the month spent in Tandale the designated editors became JOSM gods, with the majority of students and community members gaining fair literacy in JOSM’s processes. However, when starting out, the software was made accessible to the mappers purely through using a mouse. Most of the mappers were familiar with mice, whereas a trackpad was a piece of technology that wasn’t commonly used.

Conflicts commonly occurred within JOSM, in that groups were editing and uploading areas that they had mapped independently. This was difficult to control at first, as we had started with a blank slate; however, the boundaries of the sub-wards were relatively well known and demarcated by physical features. Regardless, groups wandered into areas which weren’t theirs to map. Labour was divided so that roughly half were mappers undertaking the bulk of the surveying, with the others editing. When conflicts occurred, the resolution process was occasionally esoteric, especially if the group in question had been editing for a while.

To counter this I requested that each of the sub-ward teams follow the mantra of save, upload and download often. Unfortunately this, on many an occasion, fell on deaf ears, which just meant conflicts remained a laborious process – how could they be made better? Also, JOSM’s autosave feature was a godsend: inevitably something would crash, and without it people would have had to start again.

Within the final presentation to the wider community and stakeholders, one of the points raised was incorrect spelling. There is autocomplete in JOSM, but it seems that if a spelling mistake got in first, like ‘Madrasah’ (an Islamic school, with debate on its correct spelling anyway), it would filter down, with new mappers believing that the system is right. Fixing this would mean bolting clunky bits of software onto something that was never designed for spelling correction – but should plugins be created to improve this?

OSM Tagging

Due to the informal economy within the slums, formal medical advice and dispensing are very rare. The community at large simply cannot afford ‘professional’ medical care. This has led to ‘dawa’ – medicine – shops dispensing everything from medical advice to prescription medication. Formally defining these structures in OSM is difficult; we could just create custom presets, as was done within Map Kibera and Map Mathare.

The issue here is that we are using the same ‘custom’ presets repeatedly. It would surely be better to include the commonly used attributes (common when mapping in environments such as Tandale/Mathare/Kibera) in the JOSM package itself. Is this feasible?
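
To make that concrete, a custom preset is just a small XML file loaded through JOSM’s Tagging Presets preferences. A minimal sketch of what a ‘dawa’ preset might look like – the tag choices here are illustrative rather than a settled schema:

<presets xmlns="http://josm.openstreetmap.de/tagging-preset-1.0">
  <item name="Dawa Shop" type="node,closedway">
    <label text="A shop dispensing medicine and basic medical advice"/>
    <key key="shop" value="chemist"/>
    <text key="name" text="Name"/>
    <text key="operator" text="Operator"/>
  </item>
</presets>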

Satellite Image Tracing

One of the experiments that ‘failed’ was the tracing of satellite imagery. Bing were very kind in releasing their imagery to the OSM community to derive data from, and our initial idea was to derive building outlines from this imagery. Initially it was perceived that tracing went well; some buildings weren’t quite perpendicular, but using JOSM’s built-in ‘q’ (orthogonalise) function fixed this. As map completeness approached, validation errors were caught showing that pathways were going through buildings and vice versa. There are three explanations for this:

  1. The GPS has recorded an inaccurate position, i.e. the recorded path through the environment is off because the accuracy is imprecise. (Technology Error)
  2. When editing, the editor has generalised a GPS position or incorrectly mapped a building. (Human Error)
  3. The imagery is not rectified properly, some error exists in the processing, or the imagery is not of a high enough quality from which to derive information. (Human and Technology Errors)

These factors are a combination of human and technical problems, and in this case I believe it is a culmination of all of them. Some, especially image quality and GPS accuracy, would presumably need some sort of best practice to be implemented. Other sources of human error in the editing process are harder to address; especially without a comparable dataset, this is a more open-ended problem.

Rendering

When I joined OSM I was a student in a foreign city, with no map with which to explore. A massively pro-open-source friend recommended the OSM project. I already had a GPS from my time working at a camping store during summer holidays, so it was a match made in heaven really. My first edit was of the D400 road from Nancy to Lunéville around 2007/8, and then I set to work in the area.

The community was very small and so, presumably, was the power of the servers; it would take a few days for anything to be rendered on the OSM homepage. Now something uploaded can take anything from five minutes to an hour. The server administrators deserve more recognition for their services, so if you meet them, buy them a drink – they deserve it.

Summary

In summary, I believe that the tools we use in OSM are great; none of what I’ve written is a slant at a particular piece of software or person. I do believe, however, that we should consider certain points about widening access to the software and making it more usable. I also welcome comments below!

Written and submitted from the World Bank Offices, Washington DC (38.899, -77.04256)

Developing Taarifa; London Water Hackathon 2011

On Friday the 21st of October I attended my first hackathon. For those unfamiliar with hackathons, they’re events built around the idea of sticking a load of intelligent and driven computer hackers and programmers in a room for a period of time and seeing what drops out. The format of the London hackathon was that, for 48 hours at UCL, groups would produce technology demonstrators and designs to solve global water problems.

The execution followed an introduction by the organisers framing the exercise with problem statements, and presentations by experts wanting to solve different problems – people Skyped in or presented in person. The problems presented all came from the Random Hacks of Kindness website and ranged from a water trading platform to my own issue of public service infrastructure and community mapping – my PhD.

Once the ideas were presented it was up to the developers to choose their projects. I had some awesome and – as my friend Josh Goldstein would say – ‘pumped’ developers with skills ranging from PHP to Python. The idea is essentially ‘Fix My Street’/Open311 for less developed countries. From this I started to hoover up whiteboards like they were going out of fashion, trying to pin down a design spec for the workflow through the potential system and how the triaging of reports would work. We were left with some outstanding design to be done, but nothing too complicated – or so we thought at the time. The idea was to create this system as a technology demonstrator for the Tanzanian Commission for Science and Technology (COSTECH).

When I was last in Tanzania, COSTECH was being bounced around by the IT community. In Nairobi, the iHub acts as a lightning rod for technology and IT, supporting local and foreign developers. The iHub model is excellent and really drives technological innovation in East Africa; other initiatives include the Hive Colab in Kampala and Bantalabs in Senegal. The people who run the iHub also run Ushahidi. Ushahidi is a platform/CMS designed for the crowdsourced reporting of issues; its inception was due to the Kenyan election crisis of 2008, and it has since been used to report on everything from flamingos to the recent ‘Occupy’ movements. Under the bonnet it’s a PHP website developed with the Kohana framework. It was here that we hit our first issue: none of our developers had worked with Kohana before. We split into two groups, one figuring out Kohana, the other designing workflow.

The first issue faced by the newly formed team was the development environment and server access. While most of the developers (myself included) were familiar with the programming languages needed, they weren’t our primary choice. Personally I hadn’t hacked about with PHP since October–December 2005; for others the experience seemed the same. Installing Ushahidi also proved problematic. Issues with mod_rewrite and other PHP extensions were experienced, but were eventually fixed. This wasn’t a problem with Ushahidi per se, though due to some very strange address rewriting, configuring mod_rewrite was necessary for everyone. As we didn’t have server access we hosted remotely, using one of the developers’ personal servers.
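
For anyone hitting the same wall on a stock Ubuntu/Apache box, the usual fix is along these lines – a sketch, not the exact commands we ran on the night:

# enable the rewrite module and allow Ushahidi's .htaccess rules to take effect
sudo a2enmod rewrite
# then, in the site's vhost config, set AllowOverride All for the Ushahidi directory:
#   <Directory /var/www/ushahidi>
#       AllowOverride All
#   </Directory>
sudo service apache2 restart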

Once the workflow was sketched out we presented it back to the developers, who had been exploring the Ushahidi plugin ecosystem. We integrated ‘actionable’ and ‘simple groups’. Actionable was used to ‘action’ reports and place them in a triage system. Simple groups was used to simulate a ‘team’ of fixers – ‘fixers’ being used generically for the people fixing the reported problems, though the dynamic of how this would work wasn’t considered at this stage. Currently in Tanzania, in Dar es Salaam and Zanzibar, a number of reporting/monitoring projects are in the pipeline. The next steps are to interface with local government, private companies and citizens to develop public services.

We started to have a good interface, with tasks and problems being received and triaged. The workflow started to come together: reports were verified, triaged on verification, passed to the imaginary team of fixers and finally reached conclusion or dispute resolution. The tabs to accommodate these steps were integrated into Ushahidi.

Now that reports could be triaged, we focused on expanding the reporting mechanism. Out of the box Ushahidi supports reporting through a web-based form, Twitter and its mobile applications (iOS, Android, Java and Windows Mobile 6). It can also interface with SMS gateways like FrontlineSMS; we wanted to use SMS due to the ubiquity of feature phones in Africa, which realistically can only use SMS as a form of reporting. Using SMS presented problems of geolocating the messages. Theoretically it’s possible for the telcos to triangulate the position of the sender and supply a latitude and longitude, but this isn’t practical over a 48-hour hackathon, notwithstanding the ethical and privacy concerns.

The solution we came up with, kudos to Dirk Goriseen, was to create a 100m² grid and give each grid square a 10-digit reference that people could text in. This would be prefixed by a hash (#) and then found through a regular expression in the submitted message. Obviously questions remain when implementing this on a large scale, namely ensuring local people know what their code is and creating a reference system that conforms to the human geography, not just the physical. Integrating the SMS into the system was more problematic. We found that FrontlineSMS, on the face of it, doesn’t work in the cloud – if that is wrong please correct me! So Caroline Glassberg-Powell (Caz, our PHP guru) phoned a friend of hers, Daniel Fligg, to hack an SMS gateway together.
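
The extraction side of that is simple enough; here is the idea of the pattern, shown with grep rather than in PHP – the ten-digit format itself is the illustrative part:

# pull a #-prefixed ten-digit grid reference out of an incoming SMS body
echo "Standpipe broken near the market #0123456789 no water for 3 days" | grep -oE '#[0-9]{10}'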

By now it’s about 2100. We’ve been working for roughly nine hours and our project and code are starting to take shape. Teutonic reinforcements then arrived, all of them with Android mobile phones; none of them came with any Android development experience, but they did come with a willingness to learn about developing the Android application. This process entailed downloading Eclipse, setting up the Android SDK and so on. It was a process that I feel should have been simpler; some issues with devices not being recognised and Eclipse’s idiosyncrasies (like interfacing with GitHub) were encountered.

About two hours later, once an Android development environment had been set up on each of the laptops, we forked the Ushahidi Android app on GitHub and started development. While the stock app is very good, we wanted to add the functionality of being notified when your report changed its status. To do this, while avoiding logins and passwords, we wanted to use a unique identifier. Android, within its SDK, supports a unique identifier and has specific calls for it. We later figured out that it’s not implemented across all versions and devices, with some choosing not to make that part of the SDK available. Quite why this is the case isn’t clear, and from a design aspect it seems quite stupid, but the situation is as it is, so we hack!

Our solution was to create a 256-bit random hash assigned when the user opens the application for the first time. This would then be included in the reports to identify which reports come from which device (with the potentially flawed logic that one person uses one device and one device is used by one person). So instead of seeing all the reports in the system, you just see the progress of the reports that you have submitted. Unfortunately this process went on into the early hours, at which point sleep was necessary.
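
The identifier itself is nothing exotic. The same idea, sketched outside Android for brevity (the file path is hypothetical): generate 256 random bits once, cache them, and attach the hex string to every report.

# generate a 256-bit (32-byte) hex identifier on first run, reuse it afterwards
ID_FILE="$HOME/.taarifa_device_id"
[ -f "$ID_FILE" ] || openssl rand -hex 32 > "$ID_FILE"
cat "$ID_FILE"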

None of us wanted to use the night buses to go home, so we slept in the working space. We fashioned little cubicles from the whiteboards, with chairs providing a bed. Sleep came easily, but when the three of us who remained woke, we were all quite cold and in need of more sleep and a shower. That didn’t detract from more hacking, however!

On waking up, the majority of the web-based system was already completed, and the SMS gateway had been magically integrated during the night. What remained was obvious bug-fixing, tidying up code and starting documentation.

The Android app had started to encounter real difficulties: it wasn’t compiling, even when reverting back to code which was known to be ‘fine’. This continued from 0930 to 1400. The issues turned out to be problems within the Eclipse development environment. The code compiled and ran – with our adjustments – for the first time during the presentation. It was good timing.

We presented the slides above to the judging committee. We won. The credit for this victory goes to the people involved in the project; because of our numbers, I guess we had covered the most ground. However, this isn’t to detract from the Herculean efforts of the programming team and other participants in the hackathon. Participation in our project wasn’t strict – people flitted in and out on the basis of their skills. In all, about 4–5 people could be considered core, ‘never say die, I’m here till the end’ members, but the entire project had around 10–12 people who scribbled on the board, gave advice or rolled up their sleeves and hacked a little. This blog is dedicated to all of them.

Photos from the event can be found on Flickr here: http://www.flickr.com/photos/markiliffe/sets/72157628083354348/

Written and submitted from the Nottingham Geospatial Building (52.953, -1.18405)