From the 3rd to the 5th of April I attended GISRUK (GIS Research UK) to give a paper on Community Mapping as a Socio-Technical Work Domain. In keeping with Christoph Kinkeldey's love of 1990s pop stars, Vanilla Ice made a second slide appearance, leveraging the fact that it's a very technical academic title. In short, I'm using Cognitive Work Analysis (CWA) to create a structural framework for assessing the quality of community-mapped data (quality as currently defined by ISO 19113: Geographic Information – Quality Principles – well worth a read…) where there is no comparative dataset.
CWA is used to assess the design space in which a system exists, not the system itself. By taking a holistic view and not enforcing constraints on the system, you can understand what components and physical objects you would need to achieve the values of the system, and vice versa. In future iterations I'm going to get past first base and look at decision trees and strategic trees to work out how to establish the quality of volunteered geographic data without a comparative dataset: building quality analysis in from day one, as opposed to it being an afterthought.
Written and submitted from Home (52.962339,-1.173566)
A fellow member of the OSM Foundation replied to a conversation on the mailing list: "As a guerrilla academic…". The context was a suggestion for increased academic cooperation within OSM. To this end I proposed a new working group for the OSMF: an Academic Working Group. This would have the aim of improving the academic quality and communication of those using OSM in their research, and of facilitating collaboration.
Below is the start of the manifesto. It’s not complete, but it’s a start.
Academic institutions use OSM data, be it as part of their published research or for testing hypotheses. Some of the publications are listed on the wiki: http://wiki.openstreetmap.org/wiki/Research. However, within OSM and the OSMF this research is undertaken on the researchers' own initiative. Researchers are looking at OSM through recommendation (supervision) or self-interest within their own academic structures. Given the growth of OSM and the research into it, it seems likely that academic interest will widen and grow.
The aim is to support academic research in OSM, encouraging best practices and acting as a forum for researchers. This is intended not only to support researchers starting out with OSM but also to unify a community of existing researchers; collaborations and knowledge sharing will hopefully follow. It would also identify areas of research for the community as a whole, with usability and business models as potential starting themes.
- Unite existing researchers, whether at institutions or pursuing independent academic study.
- Provide documentation (à la learnOSM) focused on researchers.
- Provide a forum for researchers to discuss their research and to bridge into the community.
- Support, and provide problems to, the academic corpus.
- Communicate potential collaborations, needs and wants.
- More TBD
Working Group vs. Community
I think this hits a gap that currently exists in the community, and I don't see potential areas for conflict. That being said, do we have enough members within the OSM(F?) to create and steer the working group?
AWG vs. other WGs
There is a small amount of overlap in interest between this proposed AWG and other working groups. I can see potential overlap with the Communications and Strategic Working Groups: Communications, because the AWG would focus on building up the OSM academic community; Strategic, because they may wish to commission, or at least support, studies into critical areas of OSM.
Again, I’ll throw this to the OSMF. Where should we go from here?
Written and submitted from the London St. Pancras to Nottingham Train.
The Digital Economy programme encompasses five universities (Nottingham, Cambridge, Reading, Exeter and Brunel) with numerous Doctoral Training Centres (DTCs) training 'the next generation of researchers'. I'm quite fortunate to be at the Nottingham Digital Economy DTC, which is rare in that it has a combined research hub and DTC. From time to time the hubs and DTCs get together in conference, where the collective research efforts and outputs are demonstrated. However, every so often the research council – i.e. the people that write the cheques – wish to see the results of their labours.
Due to the nature of the PhD programmes – working in a cohort, as opposed to individually – they wished to understand value for money, more specifically the research impact that the programmes have had. The format for this was a poster/live demonstration of gadgets, followed by an interview session with some direct, searching questions. It quickly became apparent that the role of the DTC staff is multifaceted, focusing not just on research and supervision but also deftly dealing with the work associated with the DTC. Having a window into this world offered a very different perspective on the process of research councils, from the council's expectations to the reality of research output and the seemingly intangible process of ascertaining 'impact' – impact seemingly being whether you've done something useful and interesting.
Meeting the other DTCs, with the twist of funders and assessors present, was a good break from the usual, made special by the fact that it only happens every few years. The only downside of the process was that the EPSRC (Engineering and Physical Sciences Research Council) is based in the tragic town of Swindon. However, the bods have pulled a bit of a wheeze by placing the building next to the train station, with a dedicated footbridge. This means that, if day tripping, you don't physically have to enter the town. Instead you walk over the footbridge (straight from the set of Threads) direct to the centre, unfortunately without seeing the famous Swindon vistas! All in all, a good day!
This post, I guess, has been a long time coming; the feeling basically hit its zenith and then subsided. About a month ago I had serious doubts about the PhD, around whether I was 'good enough' to complete it. The majority of the doubt focused on completing what I had perceived to be an easy task: implementing a 'simple' algorithm. This turned into three weeks of nothing. The breakthrough occurred on what was supposed to be a three-day break in Marseille, before a 12-day conference schedule in Avignon (AGILE) and Amsterdam (WhereCampEU).
It would be fair to say I'd hit the lowest point of the PhD then. It was a sequential thought process: if I can't do this simple thing, how am I prepared for the harder things later? Doubt set in, and the analysis concluded that I should quit. Then the breakthrough came and all was good, confidence restored. I then read 'The Valley of Shit', a blog post about going through the same thing: "Valleys lead to somewhere else - if you can but walk for long enough. Unfortunately the Valley of Shit can feel endless because you are surrounded by towering walls of brown stuff which block your view of the beautiful landscape beyond."
Anyhow, I feel I'm out of the valley now. All is good.
Written and submitted from Coffee Company, Amsterdam, Netherlands (52.371554,4.896772)
Another hectic few weeks draws to a close. The literature review is more grounded than before, nearing 5,000 words, and pretty much all of them are 'good'. The past week has been spent in Wernigerode, in the Harz region of Germany, at an AGILE (Association of Geographic Information Laboratories in Europe) PhD Winter School.
Mixing PhD candidates from research centres across Europe, from GIS, geomatics and other disciplines, the event started with the customary ice-breaker. Even over (some very, very good) local beer, it was clear that the participants were from very diverse backgrounds, most in the process of doing interdisciplinary research, with projects looking at conflating ontologies, predicting the location(s) of serious criminals, visualising change and crowdsourcing 3D building models, among many others.
The first day started with introductions, followed by 10-15 presentations, with questions, on our topics. Taking all day, it was good to see how other geospatial PhDs evolve in differing subjects and countries. During a very German (schnitzel) lunch break we wandered through the forest surrounding Wernigerode. Though the place is quite off the beaten track, it really is worth a visit if you want to chill out.
The second day started with a very good talk from Bénédicte Bucher of IGN about the different research groups of the French national mapping agency, concluding with her thoughts on the PhD process. She noted that when you first start it's like being in a bazaar: you see the different pathways; eventually, however, you'll be forging your own path in the wilderness over tough terrain.
This flowed into breakout sessions with other participants to start either a paper or an initiative on our subjects. Being in the Volunteered Geographic Information group, we went back to basics. Though we were from differing subsections of VGI (crowdsourcing 3D indoor models, policy of VGI in government, and myself in community mapping), our common ground was the lack of definitions in VGI, so we proposed an AGILE initiative to fix this.
After putting a 10-minute presentation together (where we managed to get MC Hammer into a slide, under the rather tenuous headline of "Break It Down"), we formally ended the winter school with our proceedings in hand. Then a spot of further networking in the only club in Wernigerode…!
Written and submitted from the DB RegioBahn Magdeburg – Berlin train.
All academic research should be ethical; guidance comes from the research councils and, generally, from individual departments and faculty ethics boards. Unfortunately the responsibility to produce ethical research rests with the individual researcher, not with the boards and faculties. When I took my first-year viva, it contained some analysis to counter questions on the data and what is possible, knowing it would be a consideration.
In it I produced a time-series analysis of the 2010 Egyptian elections with data from U-Shahid, an Ushahidi instance. This hadn't gone to an ethics board; it had only been seen by my supervision team, to allay fears over whether data could be found and analysed (the question of how useful the data is for my research is still being answered). On the day the viva went well, with a few discussion points on the data and what it means for future analysis – at this juncture, I'd like to point out that this isn't a mea culpa saying I do unethical research!
Because I have decided to use an 'ethnographically informed' methodology/action research, a lot of the research is based upon my experiences and personal narrative. This is then supported by interviews with experts or people working within the domain. This part of the research isn't controversial; the ethics around interviews and ethnography are well structured, with clear pathways, dos and don'ts. These ethics are seemingly based around not doing harm and ensuring informed consent, the latter being where the subject is made fully aware of the aims and goals of the experiment before participating. This isn't cast in stone, especially when trust/deception are part of the experiment, but considerations need to be made. Effectively: don't do the Milgram Experiment, or kill anyone.
At first glance it doesn't appear that any of that has anything to do with my research. As my work revolves around community mapping, public participatory GIS (PPGIS) and volunteered geographic information (VGI), the ethics are a lot more obfuscated. Something as simple as data collection and its nature can be questioned. Primarily I will use OpenStreetMap as my VGI source, following Muki Haklay's work on comparing VGI with an authoritative dataset, to potentially answer a question around data quality in slum mapping. Informed consent here for mapping parties or the mapping process shouldn't be too difficult, but what about the pre-existing data?
It's not realistic to get consent from every person who has contributed to the map. When OSM started, blank spots were common, not just in developing nations but in developed nations too. Thanks to satellite imagery and organisations like HOT (Humanitarian OpenStreetMap Team), mappers have also been tracing imagery where a physical survey was impractical. If we want to analyse this information, how do we gain consent? Should a clause be put into OSM's licence (this is a bad idea)? Is there a differentiation between mappers who trace and those who physically survey – are all maps created equal?
There is scant research out there, namely "Research Ethics for Studying Open Source Projects" and "Internet Research: Privacy, Ethics and Alienation – An Open Source Approach". Specifically relating to geospatial data and OSM, there is none. The only mention relating to OSM is SK53's post on research about OSM being behind a paywall, which is a fair point but still doesn't answer whether we should be using data without consent. I sense a PhD chapter forming.
Written and submitted from the Nottingham Geospatial Building (52.953, -1.18405)
I recently posted a rather long blog regarding my experience at the London WaterHackathon. Essentially, over 48 hours we hacked Ushahidi, adding a triage system for reports. It was fun. Part of my PhD research is about the design and usage of tools in the space of community mapping. People like Muki Haklay and his Extreme Citizen Science group have made some massive inroads into verifying the quality of OSM data, with very positive results for OSM. But this doesn't say anything about the quality of the tools. In a previous life I trained as a computer programmer and moved into geospatial afterwards, which leaves me with a few questions regarding the process and its output.
The software engineering rulebook was thrown out of the window, with 'good enough is perfect' taken as the mantra. Although I'm a great believer in the idea that something good and tangible can be created out of an all-nighter – look at any of my coursework from my comp sci bachelor's – I'm concerned about the sustainability of the code that it generates. While I have every intention of using the code from the hackathon in something very useful, the design came about from a 15-minute brief, with around nine whiteboards being used and data structures made on the fly. This especially applies to the SMS solution we wrote, which underscores the importance of good documentation.
From all of this, I would ask the following questions…
- How sustainable is the code developed at a hackathon?
- Are there any examples of code developed at hackathons being deployed directly, without modification?
- What steps are needed before hackathon code can be used?
Being the effective/de facto project/product manager, I found that keeping my team productive involved a constant resupply of pizza, coffee and biscuits, the idea being that content and happy coders are productive ones. Given the results, this is definitely true. It also helped that the team was awesome, throwing themselves into the project with great gusto. Again, in hindsight I wonder if a more considered approach was necessary, thinking about sustainability and reuse. We did follow the mantra of JFDI (look it up on Urban Dictionary), but in forging a massive path ahead have we inhibited further growth and expansion?
Are there design considerations and best practices to follow in hackathons for creating good code that can be reused and used effectively, i.e. ergonomics? If not, where would be a good place to start?
Written and submitted from the Nottingham Geospatial Building (52.953, -1.18405)
On Friday the 21st of October I attended my first hackathon. For those unfamiliar with hackathons, they're events built on the idea of sticking a load of intelligent and driven computer hackers and programmers in a room for a period of time and seeing what drops out. The format of the London WaterHackathon was that, over 48 hours at UCL, groups would produce technology demonstrators and designs to solve global water problems.
The execution followed an introduction by the organisers framing the exercise with problem statements, then presentations by experts wanting to solve different problems; people Skyped in or presented in person. The problems presented all came from the Random Hacks of Kindness website and ranged from a water trading platform to my own issue of public service infrastructure and community mapping – my PhD.
Once the ideas were presented it was up to the developers to choose their projects. I had some awesome and – as my friend Josh Goldstein would say – 'pumped' developers with skills ranging from PHP to Python. The idea is essentially 'Fix My Street'/Open311 for less developed countries. From this I started to hoover up whiteboards like they were going out of fashion, using them to try to pin down a design spec of the workflow through the potential system and how triaging of reports would work. We were left with some outstanding design to be done, but nothing too complicated; or so we thought at the time. The idea was to create this system so it could be used as a technology demonstrator for the Tanzania Commission for Science and Technology (COSTECH).
When I was last in Tanzania, COSTECH was being bounced around by the IT community. In Nairobi, the iHub acts as a lightning rod for technology and IT, supporting local and foreign developers. The iHub model is excellent and really drives technological innovation in East Africa; other initiatives include the Hive Colab in Kampala and Bantalabs in Senegal. The people that run the iHub also run Ushahidi. Ushahidi is a platform/CMS designed for the crowdsourced reporting of issues; its inception was due to the Kenyan election crisis of 2007–08 and it has since been used to report on everything from flamingos to the recent 'Occupy' movements. Under the bonnet it's a PHP website developed with the Kohana framework. It was here that we hit our first issue: none of our developers had worked with Kohana before. We split into two groups, one figuring out Kohana, the other designing workflow.
The first issue faced by the newly formed team was the development environment and server access. While most of the developers (myself included) were familiar with the programming languages needed, they weren't our primary choice. Personally I hadn't hacked about with PHP since October–December 2005; for others the experience seemed the same. Installing Ushahidi also proved problematic. Issues with mod_rewrite and other PHP extensions were experienced, but were eventually fixed. This wasn't a problem with Ushahidi per se, though due to some very strange address rewriting, configuring mod_rewrite was necessary for everyone. As we didn't have server access we hosted remotely, using one of the developers' personal servers.
Once the workflow was sketched out we presented it back to the developers, who had been exploring the Ushahidi plugin ecosystem. We integrated the 'Actionable' and 'Simple Groups' plugins: Actionable was used to 'action' reports and place them in a triage system; Simple Groups was used to simulate a 'team' of fixers. 'Fixers' was used generically for the people fixing the reported problems; however, the dynamics of how this would work weren't considered at this stage. Currently in Tanzania, in Dar es Salaam and Zanzibar, a number of reporting/monitoring projects are in the pipeline. The next steps are to interface with local government, private companies and citizens to develop public services.
We started to have a good interface, with tasks and problems being received and triaged. The workflow started to come together: reports being verified, triaged on verification, going to the imaginary team of fixers, and finally reaching conclusion or dispute resolution. The tabs to accommodate these were integrated into Ushahidi.
Now that reports could be triaged, we focused on expanding the reporting mechanism. Out of the box Ushahidi supports reporting through a web-based form, Twitter and its mobile applications (iOS, Android, Java and Windows Mobile 6). It can also interface with SMS gateways like FrontlineSMS. We wanted to use SMS due to the ubiquity of feature phones in Africa, which realistically can only use SMS as a form of reporting. Using SMS presented the problem of geolocating the messages. Theoretically it's possible for the telcos to triangulate the position of the sender and supply a latitude and longitude, but this isn't practical over a 48-hour hackathon, notwithstanding the ethical and privacy concerns.
The solution we came up with, kudos to Dirk Gorissen, was to create a grid of 100 m² cells, each with a 10-digit reference that people could text in. The reference would be prefixed by a hash (#) and then found through a regular expression in the submitted message. Obviously questions remain when implementing this on a large scale, namely ensuring local people know what their code is and creating a reference system that conforms to the human geography, not just the physical. Integrating the SMS into the system was more problematic. We found that FrontlineSMS, on the face of it, doesn't work in the cloud – if that is wrong please correct me! So Caroline Glassberg-Powell (Caz, our PHP guru) phoned a friend of hers, Daniel Fligg, to hack an SMS gateway together.
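As a rough illustration of the grid-code idea (this isn't our actual hackathon code, and the exact #-prefixed, 10-character reference format shown here is an assumption on my part), pulling the reference out of an incoming SMS body is essentially one regular expression:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GridCodeParser {

    // Hypothetical format: a '#' followed by a 10-character alphanumeric grid reference.
    private static final Pattern GRID_CODE = Pattern.compile("#([A-Z0-9]{10})");

    /** Returns the first grid reference found in an SMS body, or null if none is present. */
    public static String extractGridCode(String smsBody) {
        Matcher m = GRID_CODE.matcher(smsBody.toUpperCase());
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String sms = "Blocked drain near the market #TZDAR00231 please send someone";
        System.out.println(extractGridCode(sms)); // prints TZDAR00231
    }
}
```

The real work, of course, is in the lookup table mapping each reference back to its grid cell, and in making sure people actually know the code for where they're standing.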
By now it's about 2100. We've been working for roughly nine hours and our project and code are starting to take shape. Teutonic reinforcements then arrived, all of them with Android mobile phones; none of them came with any Android development experience, but they did come with a willingness to learn about developing the Android application. This entailed downloading Eclipse, setting up the Android SDK, etc. It was a process that I feel should have been simpler: issues with devices not being recognised and Eclipse's idiosyncrasies (like interfacing with GitHub) were encountered.
About two hours later, once an Android development environment had been set up on each of the laptops, we forked the Ushahidi Android app on GitHub and started development. While the stock app is very good, we wanted to add the functionality of being notified when your report changes its status. To do this while avoiding logins and passwords, we wanted to use a unique identifier. Android's SDK supports a unique device identifier and has specific calls for it. We later figured out that it's not implemented consistently across all versions and devices, with some choosing not to make that part of the SDK available. Quite why this is the case isn't clear, and from a design standpoint it seems quite stupid, but the situation is as it is, so we hack!
Our solution was to create a 256-bit random hash, assigned when the user opens the application for the first time. This would then be included in the reports to identify which reports come from which device (with the potentially flawed logic that one person uses one device and one device is used by one person). So instead of seeing all the reports in the system, you just see the progress of the reports that you have submitted. Unfortunately this process went on into the early hours, at which point sleep was necessary.
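For flavour, here's a minimal sketch of the identifier generation in plain Java (the actual work was in our fork of the Ushahidi Android app; on a device you'd generate this once on first launch, persist it, e.g. in SharedPreferences, and attach it to every report):

```java
import java.security.SecureRandom;

public class DeviceToken {

    /** Generates a 256-bit random identifier, hex-encoded as 64 characters. */
    public static String newToken() {
        byte[] bytes = new byte[32];          // 32 bytes = 256 bits
        new SecureRandom().nextBytes(bytes);  // cryptographically strong randomness
        StringBuilder hex = new StringBuilder(64);
        for (byte b : bytes) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) {
        // Generated once per install; collisions are, for practical purposes, impossible.
        System.out.println(newToken());
    }
}
```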
None of us wanted to use the night buses to go home, so we slept in the working space. We fashioned little cubicles from the whiteboards, with chairs providing a bed. Sleep came easily, but when the three that remained woke, we were all quite cold and in need of more sleep and a shower. That didn't detract from more hacking, however!
On waking up, the majority of the web-based system was already complete; the SMS gateway had been magically integrated during the night. What remained was obvious bug-fixing, tidying up code and starting documentation.
The Android app had started to encounter real difficulties: it wasn't compiling, even when reverting back to code that was known to be 'fine'. This process continued from 0930 to 1400. The issues turned out to be problems within the Eclipse development environment. The code compiled and ran – with our adjustments – for the first time during the presentation. It was good timing.
We presented the slides above to the judging committee. We won. The credit for this victory goes to the people involved in the project; because of our numbers, I guess we had covered the most ground. However, this isn't to detract from the Herculean efforts of the programming team and the other participants in the hackathon. Participation in our project wasn't strict: people flitted in and out on the basis of their skills. In all, about four or five people could be considered core, 'never say die, I'm here till the end' members, but the entire project had around 10-12 people who scribbled on the board, gave advice or rolled up their sleeves and hacked a little. This blog is dedicated to all of them.
Photos from the event can be found on Flickr here: http://www.flickr.com/photos/markiliffe/sets/72157628083354348/
Written and submitted from the Nottingham Geospatial Building (52.953, -1.18405)
I have been surprised during the Tandale project by how familiar community members are with the geographic boundaries and extent of their community. When touring areas with community members, it was clear how they used geographic features to navigate. The administrative boundaries were also formed by natural features like rivers, rather than being imposed by an outside force.
Previously, when facilitating mapping (essentially "There isn't anything here, go have a look"), the mappers would have difficulty reconciling the map with their own mental model. Here in Tandale, reading the map seemed to be a lot easier than I have previously experienced.
Map reading is a difficult skill; essentially it starts with understanding that the map is an abstract representation of space. Because a map is a representation of space, the visualisation of its various elements is at the whim of the cartographer (in our case the esteemed people who write the map styles for OSM) and of the data that the cartographer/surveyor collects.
The people of Tandale seem to have spatial awareness down to a very precise art, using landmarks and features to demarcate areas. This also relates to official and unofficial land use within Tandale. Because of the lack of formal solid waste collection, most of the waste is dumped on swampland or wasteland. Unfortunately these waste areas have no buffer from residential homes, further illustrating the potential for disease.
On a tour with the Sokoni sub-ward executive, he spoke at length on sanitation and water security. The conversation turned to common diseases and illnesses within Tandale, with malaria and HIV unsurprisingly common. Also mentioned in the same breath were typhoid and cholera, outbreaks of which, according to the officer, are common. Looking at the state of sanitation and drainage, this is very believable.
We believe that the first step to solving these problems is to have a map, and hopefully enough data to give evidence of the problems to those both inside and outside the community. We have mapped dumping grounds, formal and informal medical facilities, toilets and water points, among many other things. Using the map as a basemap in an Ushahidi instance allows the community members to use the map that they have created. Now our focus turns to completing the feedback loop, so there is an interface for the reports. Funnily enough, that's where my PhD comes in…
Written and submitted from the City Style Hotel, Sinza, Dar Es Salaam (-6.47319,39.13199)
For my PhD programme interview, the head of the Mixed Reality Lab, Steve Benford, and Holger Schnädelbach sat on the panel and gave me a grilling, while being quite nice about it. Considering I'd put the application in about two hours before the deadline and had mentioned in it that I wanted to do a PhD because of Top Gear, I was surprised I got an interview.
During the interview I was asked about projects I had worked on; I spoke about counting flamingos with mobile phones and spending too short a time with the Map Kibera project. On being asked about my research interests, I cobbled together something about interactions with crisis maps and the emerging field of crisis mapping. Then they asked if I wanted to ask any questions, so I asked about the opportunities for studying abroad. It seemed as if five minutes had passed, whereas 30 minutes had gone by.
We then went for a tour of the facilities, after which I caught the train home. I wasn't hopeful for a place; afterwards things started to ruminate – "I should have said this differently/mentioned that/not said that", etc. I went back to the lab where I was finishing my master's dissertation and just got on with it. I got an email the next day asking if I wanted a four-year place, unconditionally. That was a happy day.
I'm recounting the history of how I got my place on the PhD programme because I've been doing a lot of soul searching about the PhD and its processes. In my viva I presented questions regarding the automation of trust in crisis reports. The idea is that the crowd submits reports to platforms like Ushahidi and other forms of social media, but not all of them can be trusted. What can we do to resolve this?
This came about due to one of the DTC's staff asking me, "What is your dependent variable?" I pondered it, thought around it, and came up with measuring some MacGuffin that would reduce the error in the reports. I hadn't come up with a methodology; I figured I'd work one out as I went along.
Along these lines I started to think about the spatio-temporal distribution of reports. Could the reports be categorised by understanding the locality from which they originate? At first glance yes, but then understanding how people report becomes important. I was originally intending to look at crises like the 2007 Kenyan election crisis and the 2010 Egyptian elections: by understanding the features of the reports made during the aftermath of these events, could that inform the understanding of future data?
However, in these countries the penetration of technology isn't as high as in the Western world. While Kenya and Huawei have introduced the IDEOS Android phone with GPS, 3G and all the features you would expect on a smartphone, cheap Nokias and generic Chinese phones still dominate the market. This means that reports would need to be geolocated without a GPS, therefore relying on there being enough geo-information with which to place the report. This isn't the place to talk at length on vernacular geographies – I'll leave that to Julian Rosser and Tim Waters – but it would be fair to say that locating a place precisely from the vernacular is quite difficult.
Presuming that I would be able to geolocate reports, how would the analysis – my new and novel contribution – work? I had planned to look at time-series analysis, moving on to agent-based models and Bayesian networks to classify the reports. From this I would hopefully have a toolkit/method to automate the assignment of trust and reduce the uncertainty in the reports.
The missing link, however, was that after people have made the reports and the trust/uncertainty value is assigned, then what? The Ushahidi platform is widely used to map crises, but it is also used to map power cuts and count flamingos. However, I have not found (and this is an open invitation for anyone to make me aware of such content) how Ushahidi's (political) crisis mapping has been used to complete a feedback loop – by which I mean end-to-end delivery of service. For example, a bomb has exploded, causing panic; the responders then use this information to gain situational awareness of the issue and provide the appropriate response. This research could possibly take the form of an ethnographic analysis of a crisis situation, a human factors study of the sense-making/situational awareness requirements, or a framework of the interactions between the technology and humans in an HCI-esque study.
Within the research literature this analysis doesn't exist, and because of this I have no foundation from which to say how good my work would be. In my search for a dependent variable, I've ended up without one, and at a dead end. So what now? In hindsight most of this wasn't a good idea; however, a common issue faced by those starting a PhD is that we don't have 20:20 hindsight.
Before going on a sojourn to Tanzania for the Tandale mapping project I had a meeting with my supervisors to discuss my thoughts. They agreed with what I had said, and asked about the way forward. From this, I now intend to look at the role of technology in mapping deprived environments. This will take an ethnographically informed approach to create a framework for how such mapping can be used in a feedback loop of service delivery.
In English, and not academic psycho-babble, this means I will look at the emergent field of community participation in mapping (like Map Kibera, Map Mathare and, now, Map Tandale, among others), the technology requirements for supporting these activities, and whether new tools can be created to improve them. Any new tools would then be tested against each other, creating dependent and independent variables. Thus an output/novel contribution would be an empirical study analysing whether one tool is better than another, supported by other (policy-orientated) research on the different methods of creating the tools.
In taking this approach I hope to inform the process of service provision, either by adapting tools or by creating my own. It may seem like starting back at square one; however, understanding the process of a PhD and the right questions to ask weren't covered in my previous education. My view of the PhD is still one of reverence, but by reading theses and going through the supervision process I have a rare moment of clarity: I know what I need to do next. So, time for a Kilimanjaro, unfortunately at the Hyatt.
Written and submitted in the World Bank Offices, Dar Es Salaam, Tanzania (-6.81298, 39.29194)
Asking "what is your question?" brings fear to a PhD candidate. Currently the question is "How to reduce uncertainty in citizen-reported crisis information". It's already been hacked around, removing 'trust' (viewed as a buzzword and something not ethically viable) and replacing it with uncertainty. It's certain to be further refined, ending with an understanding of what I've actually answered in the process. To form our questions, or at least have a starting point for the process, we enjoyed a writing retreat. This entailed going to a stately home in the Leicestershire countryside. I kept the piece of paper on which the question was formed; have a look at the link below (don't worry, it's not a rickroll).
I spent the past week getting back in touch with the biker inside, unfortunately with a trip to Blackpool. After this misadventure, onwards to Lancaster for hotpot and DocFest 2011. DocFest is a conference for PhD students within the Digital Economy programme. It brings together a plethora of different disciplines from different Doctoral Training Centres, with computer scientists, designers and psychologists represented, and everything in between. Keynotes were given by Sir Chris Bonington – frankly the best and most inspirational presentation I'd ever seen – Alan Dix, an HCI guru/fruitloop, and Marc Huijbregts, the digital director of Saatchi and Saatchi. Marc presented some of the most weird and wonderful charts I'd ever seen – "the slope of disillusionment", anyone?
Meeting other people from the many Digital Economy DTCs was good, especially meeting people you follow on Twitter in real life.
One of the workshops among the keynotes was on collaboration; on the team were two sterling Horizonauts, Tim Pearce and Anthony Cousin, and the Research Councils UK director for the Digital Economy, John Baird. We were tasked with coming up with a haiku to summarise life in our PhDs, and our entry won joint first. I think if the research doesn't work out, maybe a career as a learned poet is on the cards…
The research transforms my mind
Where is my question?
The first supervision meeting of a PhD is a big experience, especially with all the minds around the table. It was universally agreed that an interesting PhD is to be found in the fragmented mess that is the proposal, though that is being hacked to pieces to pass muster. The process will no doubt get there, hopefully sooner rather than later.
From my perspective, one of the most interesting things to come out of the proposal process so far is the inability of different academics to give the same thing the same name. So far the research could fall under 'Crisis Mapping', 'Time Critical Volunteered Geographical Information' (aka crowdsourcing) and 'Crisis Informatics'.
While the aims of this PhD are to be interdisciplinary, eventually a contribution needs to be solidly made to a particular field, and as such it needs to be within the vocabulary of that field (or does it?). For the time being we're taking 'Crisis Informatics' to be the MacGuffin that defines the PhD, but formalising how each of the terms and their definitions combine and affect the future research is something that I'll also be working on, adding to the list of things to do!
Written and submitted from the Nottingham Geospatial Building (52.953, -1.18405)
"How to establish trust in citizen-reported human crisis reports" is my PhD question. So far. Ushahidi reports on 'crises' (I need to define that) in the vague field that is crisis mapping. I've had a few ideas, which I've spoken about at WhereCampEU, namely on the classification of crises and trust in the reports of events on the ground.
In this, I think there are four categories of crisis, across the axes of severity (acute to severe) and time (a few seconds to ongoing indefinitely). On presenting this at WhereCampEU a comment was made – "I was waiting for the good bit of news" – then I realised it's all bad! Most of the attention from the community is, seemingly, on mapping mid-level to severe crises over a medium time-scale. Examples of these sorts of crises would include the Kenyan election of 2007 and the Arab Spring uprisings of 2011 in Tunisia and Egypt: something happened, and within a few weeks the crisis was 'resolved' – I'm not suggesting that the problems are solved, but they're not in a state of active warfare.
Topics like earthquakes and floods would fall into roughly the same category as the uprisings, with critical immediate effects, whereas bombings and their ilk would be severe but over quickly; also, the spatial distribution of reports probably wouldn't be as great as for disasters like civil wars.
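To make the quadrant idea concrete, here's a toy sketch of the two axes (the labels are my own illustrative shorthand, not a formal taxonomy):

```java
public class CrisisTaxonomy {

    // Severity axis: from acute to severe, as described above.
    enum Severity { ACUTE, SEVERE }

    // Time axis: from over in moments to ongoing indefinitely.
    enum Duration { MOMENTARY, ONGOING }

    /** The four categories fall out of crossing the two axes. */
    static String categorise(Severity severity, Duration duration) {
        return severity + " / " + duration;
    }

    public static void main(String[] args) {
        System.out.println(categorise(Severity.SEVERE, Duration.MOMENTARY)); // e.g. a bombing
        System.out.println(categorise(Severity.SEVERE, Duration.ONGOING));   // e.g. a civil war
    }
}
```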
I believe it's also important to stress that the whole process isn't about map creation: a map is a representation of the world, not the world itself. I'm increasingly feeling that, though the outside community's response of tracing roads and POIs from map tiles was invaluable in Haiti, reports (SMS/Twitter/web) generated by the crowd within the crisis zone are of better value. From this, a context map of what is going on in certain areas could be raised, though I'm still considering how this can be visualised and fit into a useful chain of use.
From this rambling, over the next six months I'm aiming to do some statistical experiments on crisis data, looking at which media people use to report and what percentage of reports are verified for each input method (SMS/Twitter/web/etc.). I would also like to see a definitive case study of the 'supply chain' of information, from the user reporting on the ground to an organisation that uses that report. I believe understanding how useful the reports are to the aid organisations would be a good start on understanding this complex system.
In all, please comment on the ideas presented, and if you've seen related information around the web or in journals, I would be very interested for like-minded people to get in touch.
Academia is a funny world: apparently at the cutting edge of research, it often finds itself in the shadow of industry, the open-source community and cutting-edge blogs. Within my own research in crisis cartography, massive gaps exist between the epistemology of research in academic scholarship and the reality of crisis response, especially the practices extolled by organisations like UNHCR/UNOCHA, MapAction, HOT and the Ground Truth Initiative, among many others.
What needs to be done within the academic sphere to bring it up to the bleeding edge of crisis response? People like Mikel Maron, Erica Hagen and Patrick Meier have published within academic spheres on crisis cartography, whereas Dane Springmeyer, Kate Chapman, Erik Hersman and Alex Anderson create and use technology like Ushahidi and FrontlineSMS. On the ground, at the coal-face so to speak, people like Primož Kovačič, Sebastien Pierrel and Nicolas Chavent provide invaluable information through blogs and surveys.
While that isn't an authoritative list, they are brilliant to follow on Twitter, as they are more connected to the bleeding edge than I am. From this, how can these people and their sources of information be brought into the realm of citable/publishable academia? They are a massive untapped resource with more to say than some tweed-jacketed pipe lighter. I suppose it all boils down to prose and audience.
One of my PhD 'themes' is resilience. Quite how to define this I am unsure; however, my working, albeit colloquial, definition is: "Keeping stuff going when it's all going horribly wrong". This is a broad definition but its sentiment is clear. Within the current bounds it would be stupid to think I'm going to change the world completely; however, changing it a bit is the name of the game.
My current research is based around the pervasive monuments research group in Horizon. My chunk involves using data accrued by citizens during the purges – containing all manner of information: dates, what happened, their own personal history, etc. – and mashing this up with augmented reality to provide a tool for exploring this information using some sort of mobile device, all associating the data with a place, regardless of locale (for the moment).
The hobo code was used (and is still used) to provide a personal though inclusive means of hidden communication to those who can decode it. From this, the question arises of what new methods exist for transmitting information within a physical environment using digital markers. There are D-Touch stickers: they can be manufactured using a portable sticker printer and can be decoded both by a mobile phone and by the Mk.1 human eyeball. I think they're great, but being stickers they can easily be removed and aren't resilient to the prevailing elements.
QR (Quick Response) codes have been printed off and integrated into art, and they are becoming ubiquitous, with Waitrose now placing them in primetime Christmas TV advertising. The issue here is their ability to be decoded and left by humans; placing them on stickers aids their dissemination, but they would be on stickers all the same.
As always, there are more questions than answers, the biggie being: how resilient can signs be?