MyTracks GeoJSON Conversion Tool

Deep thoughts and Long Bike rides.

I have been tracking my bike rides for the past few years. These are mainly commuter miles, with a few longer bike tours and weekend rides. I have adopted a few apps over the years, but I always liked Google MyTracks the best because it gave me access to my stats and a map of my ride.

In the past, I could sync my data with my Google Drive account and then view a spreadsheet of all of my rides, their stats, and a link to the route on a custom Google map. This export was discontinued in May 2013.

The current export format is a 5000-line KML file, riddled with markup and tags I don't need. Where were my stats? Why couldn't I aggregate my rides? I was pissed… until I learned how to parse XML with Processing.

There were several problems with these KML files:

  • A ton of markup info only the app or Google Maps cares about.
  • Every damn coordinate was wrapped in its own tag, formatted with spaces… no array, no commas.
<gx:coord>-73.964595 40.675743 2.0</gx:coord>
  • All of the good stats were crammed into a single tag, named <description>, in a format that looked like JSON's evil, half-baked cousin (a parsing sketch follows below):
<description><![CDATA[Created by Google My Tracks on Android.
Name: West Side Ride
Activity type: cycling
Description: -
Total distance: 48.65 km (30.2 mi)
Total time: 2:56:52
Moving time: 2:46:59
Average speed: 16.50 km/h (10.3 mi/h)
Average moving speed: 17.48 km/h (10.9 mi/h)
Max speed: 45.90 km/h (28.5 mi/h)
Average pace: 3.64 min/km (5.9 min/mi)
Average moving pace: 3.43 min/km (5.5 min/mi)
Fastest pace: 1.31 min/km (2.1 min/mi)
Max elevation: 38 m (126 ft)
Min elevation: -49 m (-161 ft)
Elevation gain: 925 m (3035 ft)
Max grade: 21 %
Min grade: -27 %
Recorded: 7/13/2013 7:05AM
]]>
</description>
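
Parsing both of these turned out to be manageable with Processing's built-in XML class. Here is a rough sketch of the idea; the file name, JSON keys, and recursive walk are mine for illustration, and the actual tool's output may differ:

// A rough sketch of the parsing idea using Processing's built-in XML class.
// The file name and JSON keys are illustrative, not the tool's exact output.
JSONObject ride = new JSONObject();
JSONArray points = new JSONArray();

void setup() {
  XML kml = loadXML("ride.kml");
  collect(kml);
  ride.setJSONArray("points", points);
  println(ride);
}

// Walk the whole tree so we don't have to hard-code the KML nesting.
void collect(XML node) {
  String name = node.getName();
  if (name == null) return;
  if (name.equals("gx:coord")) {
    // "-73.964595 40.675743 2.0"  ->  [-73.964595, 40.675743, 2.0]
    JSONArray coord = new JSONArray();
    for (String p : splitTokens(node.getContent())) coord.append(float(p));
    points.append(coord);
  } else if (name.equals("description")) {
    // The CDATA block is just lines of "Label: value" -- split on the colon.
    for (String statLine : split(node.getContent(), '\n')) {
      String[] kv = split(statLine, ": ");
      if (kv.length == 2) ride.setString(trim(kv[0]), trim(kv[1]));
    }
  }
  XML[] kids = node.getChildren();
  if (kids == null) return;
  for (XML child : kids) collect(child);
}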

It's my Data. Give it Back.

Well, unlike before, the app is giving me all of the data. But it's formatted in such a way that it's a real pain to use or plug into any other program. So I created a tool for bike riders who use MyTracks. It cleans up all the crap and outputs nice, clean GeoJSON to be used however they see fit. I also include an HTML file to upload along with the JSON file to their server for a quick aggregated visualization.
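
For reference, GeoJSON stores each ride as a Feature: the stats go into properties and the track becomes a LineString of [longitude, latitude, elevation] triples. Roughly like this (the property names here are just illustrative, not necessarily the tool's exact field names):

{
  "type": "Feature",
  "properties": {
    "name": "West Side Ride",
    "distance_mi": 30.2,
    "moving_time": "2:46:59"
  },
  "geometry": {
    "type": "LineString",
    "coordinates": [
      [-73.964595, 40.675743, 2.0],
      [-73.964617, 40.675684, 9.0]
    ]
  }
}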

A few of my rides loaded into the visualization.

Ride. Sync. Repeat.

  1. Download the Processing sketch/tool here: DOWNLOAD ME
  2. Sync your MyTracks data with your Google Drive Account.
  3. Download all KML Files from the MyTracks folder in Drive and unzip.
  4. Drag and Drop all KML files into the DATA folder of the Processing sketch.
  5. Run the sketch, click in the sketch window, and press any key to save out a full JSON file. *If you don't click and press a key, the file will not be written and you will not have all of your data in a usable format! (A quick sketch of this save step follows the list.)
  6. For a quick web visualization tool, upload your ride.json file along with the index.html file to a folder on your server and explore.
  7. Add any additional KML files to the data folder and run whenever you need to update your json file.
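
The save step in #5 is just the standard Processing save-on-keypress pattern; a minimal sketch of what is going on (rides stands in for whatever JSONArray the tool has built from the parsed KML files):

// Minimal sketch of the save-on-keypress step. "rides" stands in for the
// JSONArray the tool builds up from the parsed KML files.
JSONArray rides = new JSONArray();

void draw() {
  // draw() must exist (even empty) for keyPressed() to fire
}

void keyPressed() {
  // Nothing is written to disk until this runs, which is why the sketch
  // window needs focus (the click) and a key press.
  saveJSONArray(rides, "ride.json");
  println("Saved " + rides.size() + " rides.");
}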

 

Well, I changed my mind…RIP Hybridizer.

My initial final project proposal was to create a face hybridizer with two video feeds. After messing with the number of tiles, I realized it's hard to get a tile to line up with each facial feature, and a lot of the tiles are sort of wasteful.

One of the class suggestions was to make it a puzzle like this:

I’m on it! So far, I have the tiles shuffling randomly and a grid to drag and drop onto. The grid of divs is dynamically created the same way the tiles are, so I can ramp up the difficulty later.


Check it out here.

Next Steps:

  • Drag and drop snap to grid.
  • Button to increase difficulty (number of tiles).
  • Timer?
  • The tiles and grid divs are numbered by location when created, so maybe I can come up with logic to check whether the puzzle is solved (a rough sketch of that check is below).
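
For that last item, the check itself should be simple, since the tiles and slots are both numbered at creation: the puzzle is solved when every slot holds the tile with its own number. The project itself is divs and JavaScript, but the logic is the same in any language; sketched here in Processing-style Java:

// slotTile[i] holds the number of the tile currently sitting in grid slot i.
// (Hypothetical helper -- the real project tracks this with numbered divs.)
boolean isSolved(int[] slotTile) {
  for (int i = 0; i < slotTile.length; i++) {
    if (slotTile[i] != i) return false;
  }
  return true;
}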

 

Hybridizer Part 1

For my final, I was thinking of exploring hybridizing two faces by allowing users to drag and drop features from one video to the other. Right now, I have one WebRTC video feed rendered as tiled, draggable canvas elements. The next step is to add another user’s video and allow the users to swap features, making a face mosaic. Ideally, I’d use the OpenTok API, but this weekend I abandoned it when I hit a few snags.

Here’s what I’ve got so far:

 

Next steps:

  • Right now, I’m not really using any server-side code or socket logic. I need to get that going so we have at least two people’s faces showing.
  • Implement PeerJS.

Ideal User Flow:

  • User arrives at page and is prompted to enable camera.
  • After that is approved, the user is asked to enter a user name.
  • Once they enter a user name, other users’ video feeds are shown.
  • At either the center, or bottom of the screen is a grid for dragging and dropping features from each user to create a hybrid face.
  • If everything works out in time, I’d like to find a way to save the hybrid faces for download.

 

 

Data Representation Final Project Proposal

I have been using Google MyTracks to record my bike data for the past three years. In the beginning, this app offered more functionality and access to my data than any other bike GPS phone app. Unfortunately, after an update this past spring, what used to be a spreadsheet, fusion table, and map file on Google Drive has become a series of KML files where the data is not easily accessible. Time to get my data back, visualize the last year of my rides over time and in total, and then put this app and data set to rest and move on to a more stable workout-tracking method.

Phase one is web. This week, I built a Processing app that reads out what I need from the KML files and outputs each ride as a JSON object that looks just like a Python dictionary. This works for me on the web side with MongoDB, but I may also consider outputting a GeoJSON-formatted version of the data and working with Leaflet.js for an initial interactive visualization.

{
'name': '5/7/2013 5:58PM',
'distance': 5.8,
'time': 1806,
'average speed': 11.5,
'max speed': 39.1,
'average pace': 314,
'points': [
[-73.964319,40.675937,18.0],[-73.964302,40.67591,12.0],[-73.964382,40.67582,12.0],[-73.964427,40.675812,14.0],[-73.964624,40.675657,7.0],[-73.964617,40.675684,9.0],[-73.964632,40.675701,10.0],[-73.964584,40.675718,13.0],[-73.964577,40.675721,14.0],[-73.964524,40.675717,12.0],[-73.964493,40.675709,12.0],[-73.964446,40.675698,12.0],[-73.964398,40.675689,12.0],[-73.96435,40.675681,12.0],[-73.964299,40.675674,12.0],[-73.964247,40.675667,12.0],[-73.96419,40.675659,13.0],[-73.964134,40.675653,13.0],[-73.964079,40.675645,13.0],[-73.964024,40.675635,13.0],[-73.963969,40.675624,13.0],[-73.963913,40.675615,13.0],[-73.963855,40.675607,12.0],[-73.963798,40.6756,12.0],[-73.963742,40.675593,12.0],[-73.963689,40.675585,11.0],[-73.963638,40.675576,11.0]............]
}

Phase two is physical topography models based on the areas of the city I’ve ridden most frequently. These may become a hybrid of the actual topo map of the city and my activities, but initially, I’d like to develop some logic for creating a topographical self portrait based solely on my geopoints.

This is something I did in undergrad, combining two very different square-mile topographies manually in Illustrator (because I’m insane), which was then built as an actual topo model in the pre-laser-cutter days (again, because I’m insane).

 

My Latitude Sonified

For an initial exercise in data sonification, I extracted the latitudes from all of my OpenPaths data since September of this year. The low, familiar tone is my neighborhood in Brooklyn; any deviation from that is a commute by bike or subway to Manhattan. It sounds like a creature of habit who likes to work from home.
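
This isn’t necessarily how the piece above was made, but as a rough illustration of the mapping: feed the latitude series into an oscillator’s frequency, so staying in Brooklyn holds one steady tone and a trip into Manhattan bends the pitch. A sketch using the Minim library and an assumed latitudes.txt file (one reading per line):

// Illustrative sketch only -- assumes Minim and a "latitudes.txt" data file.
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
Oscil tone;
float[] lats;
int i = 0;

void setup() {
  size(400, 100);
  frameRate(20);  // step through ~20 readings per second
  minim = new Minim(this);
  out = minim.getLineOut();
  tone = new Oscil(220, 0.5, Waves.SINE);
  tone.patch(out);
  String[] lines = loadStrings("latitudes.txt");
  lats = new float[lines.length];
  for (int j = 0; j < lines.length; j++) lats[j] = float(lines[j]);
}

void draw() {
  // Brooklyn sits low in this range, Manhattan higher: latitude becomes pitch.
  float hz = map(lats[i], 40.64, 40.80, 200, 600);
  tone.setFrequency(hz);
  i = (i + 1) % lats.length;
}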

NYT API V1

For some reason, my background is not refreshing…leading to a fuzzy font. My apologies.

This week we explored the New York Times Article Search API. In an effort to better understand the query string construction process, I constructed my own query string and loaded it into a Processing JSONObject.

First, I queried the word “discrimination” and retrieved the geo_facets and per_facets associated with the articles.

Then, I decided I wanted to know what kind of discrimination was discussed in each article so I pulled out the titles. Unfortunately, this is also when I started getting out of bounds errors and discovered the API only returns 10 articles at a time. In order to get more, I had to do multiple queries and increase the offset each time.

int offset = 0;
 loadJSON(endpoint, offset, apiKey);
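
So getting 80 articles is just that call in a loop, bumping the offset each time (assuming loadJSON() tacks the offset onto the query string; in the v1 API, each offset step returns the next set of 10 results):

// Page through 80 results inside setup(): the v1 API returns 10 at a time,
// so 8 calls with offset 0..7 (loadJSON() is assumed to append "&offset=" + offset).
for (int offset = 0; offset < 8; offset++) {
  loadJSON(endpoint, offset, apiKey);
}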

I then displayed 80 articles and their titles in chronological order to get a sense of well… what people were discriminating against at different times.


I then returned to my original idea and went back to focusing on the geo_facets for discrimination. I pulled out the geo_facet terms and counts, put them in an IntDict, and sorted them by value. I did this for the years 1994–2003 and stepped through the results to get a sense of which areas had discrimination issues reported in the Times, and how the locations and quantities changed over time.

void loadJSON(int start, int end) {
  IntDict tempDict = new IntDict();
  String yr = "" + start;
  yr = yr.substring(0, 4);
  println(yr);
  String endpoint = "http://api.nytimes.com/svc/search/v1/article?format=json&query=discrimination&facets=geo_facet&begin_date=" + start + "&end_date=" + end + apiKey;
  JSONObject myJSON = loadJSONObject(endpoint);
  JSONObject facets = myJSON.getJSONObject("facets");
  JSONArray results = facets.getJSONArray("geo_facet");
  // Go through the array and access every individual facet
  for (int i = 0; i < results.size(); i++) {
    JSONObject facet = results.getJSONObject(i);
    // Get the facet term and its count
    String term = facet.getString("term");
    int count = facet.getInt("count");
    tempDict.add(term, count);
  }
  Element e = new Element();
  e.yr = yr;
  e.locDict = tempDict;
  elements.add(e);
}
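
Called once per year, with each year’s IntDict sorted afterwards, it looks roughly like this (Element is the small holder class used above; elements and apiKey are the globals the function already relies on):

// One request per year, then sort each year's geo_facet counts, largest first.
// (This sits in setup(); "elements" and "apiKey" are the globals used above.)
for (int y = 1994; y <= 2003; y++) {
  loadJSON(y * 10000 + 101, y * 10000 + 1231);  // begin/end dates as YYYYMMDD
}
for (Element e : elements) {
  e.locDict.sortValuesReverse();  // biggest counts first
  println(e.yr + ": " + e.locDict);
}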

Okavango Heart Rate Day 10

I started off mapping John’s heart rate to the y axis and time to the x axis. I then added in sightings to compare spikes in HR to animal sightings. (If there was a crocodile sighting, it’s green and larger.) Many of John’s readings were missing a heart rate.

I noticed only a few of the sightings were showing up, so I printed out the starting and ending times for each data set to compare. Sightings started much earlier in the day than the HR readings for John. To have a better view of the whole picture, I mapped all three men’s HR to compare with the sighting times. Since Steve had the most consistent readings, the start and end times used for the mapping are his.


/*
 Okavango Heartrate Day 10

 http://intotheokavango.org/api/timeline?date=YYYYMMDD&types=TYPE

 http://intotheokavango.org/api/timeline?date=20130916&types=sighting
 */
import java.util.Date;
import java.text.SimpleDateFormat;

ArrayList<Beat> sbeatList = new ArrayList<Beat>();
ArrayList<Beat> jbeatList = new ArrayList<Beat>();
ArrayList<Beat> gbeatList = new ArrayList<Beat>();
ArrayList<Sighting> sightList = new ArrayList<Sighting>();

String beatUrl = "http://intotheokavango.org/api/timeline?date=20130916&types=ambit";
String sightUrl = "http://intotheokavango.org/api/timeline?date=20130916&types=sighting";

color croc = color(24, 255, 40);
float maxHeight = 100;

void setup() {
  size(1280, 720, P3D);
  loadJSONbeat(beatUrl);
  loadJSONsight(sightUrl);
  plotPoints();
}

void draw() {
  background(0);
  beatLine();
  for (Beat b : sbeatList) {
    b.display();
  }
  for (Beat b : jbeatList) {
    b.display();
  }
  for (Beat b : gbeatList) {
    b.display();
  }
  for (Sighting s : sightList) {
    s.display();
  }
}

void beatLine() {
  // thin line through each person's readings
  pushStyle();
  strokeWeight(1);
  noFill();
  stroke(255, 40, 40, 150);

  beginShape();
  for (int i = 0; i < sbeatList.size(); i++) {
    Beat b = sbeatList.get(i);
    vertex(b.pos.x, b.pos.y, b.pos.z);
  }
  endShape();

  beginShape();
  for (int i = 0; i < jbeatList.size(); i++) {
    Beat b = jbeatList.get(i);
    vertex(b.pos.x, b.pos.y, b.pos.z);
  }
  endShape();

  beginShape();
  for (int i = 0; i < gbeatList.size(); i++) {
    Beat b = gbeatList.get(i);
    vertex(b.pos.x, b.pos.y, b.pos.z);
  }
  endShape();
  popStyle();

  // animate John's beats
  pushStyle();
  noFill();
  strokeWeight(3);
  stroke(255, 40, 40);
  int tail = 3;
  beginShape();
  for (int i = 0; i < tail; i++) {
    Beat b = jbeatList.get((i + frameCount/2) % jbeatList.size());
    vertex(b.pos.x, b.pos.y, b.pos.z);
  }
  endShape();
  popStyle();
}

void plotPoints() {
  int start = (int) sbeatList.get(0).date.getTime();
  int end = (int) sbeatList.get(sbeatList.size() - 1).date.getTime();

  // plot Steve's rate (most consistent readings, so his start/end frame the timeline)
  for (Beat b : sbeatList) {
    float t = (int) b.date.getTime();
    float x = map(t, start, end, 100, width);
    float y = map(b.hr, 1, 3, 100, 340);
    b.pos = new PVector(x, y, 0);
  }

  // plot John's rate
  for (Beat b : jbeatList) {
    float t = (int) b.date.getTime();
    float x = map(t, start, end, 100, width);
    float y = map(b.hr, 1, 3, 240, 480);
    b.pos = new PVector(x, y, 0);
  }

  // plot GB's rate
  for (Beat b : gbeatList) {
    float t = (int) b.date.getTime();
    float x = map(t, start, end, 100, width);
    float y = map(b.hr, 1, 3, 480, 720);
    b.pos = new PVector(x, y, 0);
  }

  // plot sightings along the top
  for (Sighting s : sightList) {
    float t = (int) s.date.getTime();
    float x = map(t, start, end, 100, width);
    float y = 50;
    s.pos = new PVector(x, y, 0);
  }
}

void loadJSONbeat(String url) {
  SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'+0200'");
  // Load the JSON
  JSONObject myJSON = loadJSONObject(url);
  // Get the features array
  JSONArray features = myJSON.getJSONArray("features");
  // Go through the array and access every individual feature
  for (int i = 0; i < features.size(); i++) {
    JSONObject singleFeature = features.getJSONObject(i);
    // Get the properties object
    JSONObject properties = singleFeature.getJSONObject("properties");
    // Get the person's name
    String person = properties.getString("Person");
    // Get the heart rate and timestamp; entries missing either are skipped
    try {
      float heartRate = (float) properties.getFloat("HR");
      String dateString = properties.getString("DateTime");
      //println(dateString);
      Date date = sdf.parse(dateString);
      //println(dateString + " = " + date.getTime());
      if (date != null) {
        Beat b = new Beat(heartRate, dateString, date);
        if (person.equals("John")) {
          jbeatList.add(b);
        }
        if (person.equals("Steve")) {
          sbeatList.add(b);
        }
        if (person.equals("GB")) {
          gbeatList.add(b);
        }
      }
    }
    catch (Exception e) {
      //println("error parsing date" + e);
    }
  }
}

void loadJSONsight(String url) {
  SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'+0200'");
  // Load the JSON
  JSONObject myJSON = loadJSONObject(url);
  // Get the features array
  JSONArray features = myJSON.getJSONArray("features");
  // Go through the array and access every individual feature
  for (int i = 0; i < features.size(); i++) {
    JSONObject singleFeature = features.getJSONObject(i);
    // Get the properties object
    JSONObject properties = singleFeature.getJSONObject("properties");
    try {
      String animal = properties.getString("Bird Name");
      String dateString = properties.getString("DateTime");
      //println(dateString);
      Date date = sdf.parse(dateString);
      //println(dateString + " = " + date.getTime());
      if (date != null) {
        Sighting s = new Sighting();
        s.animal = animal;
        s.dateString = dateString;
        s.date = date;
        if (animal.equals("Crocodile")) {
          s.c = croc;
          s.sw = 8;
          println(date.getTime());
        }
        sightList.add(s);
      }
    }
    catch (Exception e) {
      //println("error parsing date" + e);
    }
  }
}

void keyPressed() {
  if (key == 's') {
    saveFrame(frameCount + ".jpg");
  }
}

class Sighting {
  PVector pos = new PVector();
  String animal;
  String dateString;
  color c = color(255, 204, 0);
  float sw = 3;
  Date date;

  void display() {
    pushMatrix();
    translate(pos.x, pos.y, pos.z);
    pushStyle();
    strokeWeight(sw);
    stroke(c);
    point(0, 0, 0);
    popStyle();
    popMatrix();
  }
}

class Beat {
  PVector pos = new PVector();
  float hr;
  String dateString;
  Date date;

  Beat(float _hr, String _dateString, Date _date) {
    hr = _hr;
    dateString = _dateString;
    date = _date;
  }

  void display() {
    pushMatrix();
    translate(pos.x, pos.y, pos.z);
    strokeWeight(2);
    stroke(255, 40, 40);
    point(0, 0, 0);
    popMatrix();
  }

  void update() {
  }
}

 

Socket.io Pictionary Round 1

Instructions:

Artist:
Click and drag the mouse to draw on the canvas.
Clicking the clear button will reset the canvas.
Watch for players’ answers on the right.
Double-click the picture of the player who guessed correctly to declare a winner.

Players:
Enter your guesses in the message box at the top of the screen.

If you don’t see your image in a box on the right, you are just an observer for now.

Let’s Play!

Issues:

  • Still a work in progress, but I’ve got the basic play flow working for 5 players at this time.
  • One hurdle was creating a mousedragged event listener by combining mousedown and mousemoved.
  • If I used only the mousedown event the drawing would be very polygonal and choppy.
  • If I used only the mousemoved event all mouse movements were recorded and it can get scribbly really quickly.
  • The workaround for now: while the artist has the mouse down and moving, small ellipses are drawn. Not ideal, but I needed to avoid a continuous path. This will be the next thing I attempt to improve.

Next Steps:

  • Wait for 5 people to connect before allowing game to begin.
  • Allow for multiple game rooms at once.
  • Better winner declaration.
  • The hot/cold hinting Surya had suggested.
  • Better drawing experience.

 

Latitude is Y, Longitude is X

  • Strike location mapped from Lat, Long.
  • Strikes load in chronological order, glow, fade, and are destroyed to keep the ArrayList size manageable.
  • Strike amplitude mapped to radius.
  • Height x Lat and Long x Height histograms.

So I chose the mondo lightning CSV file. After looking at the headers in the terminal, I found which columns were Latitude, Longitude, Height, and Amplitude. I have never mapped geo data before, so I found a library called googleMapper to help me get the map images, along with a few helper functions like converting latitude and longitude to Cartesian coordinates. I also took a stab at 3D with the PeasyCam library.

My first sketch used a small PNG with its size mapped from the amplitude of each strike. I also used additive blending mode, which made the more frequently struck areas brighter over time. The only problem was that I would lose my map after the first frame due to the additive blending.


 

My second sketch used an ellipse whose radius and alpha were mapped from amplitude, and a line in the Z axis whose length was mapped from the height of the strike. Unfortunately, this showed me that height is recorded almost exclusively for North America in this data set, while much of the rest of the world did not record it.


Well, don’t name your variables LAT and LONG when you are passing them in as Y and X positions. The habit of saying latitude and longitude in that order wound up giving me a crazy inverted map of the strikes. I figured out, after re-running my test pin code, that I was flipping X and Y:

// pin Brooklyn on the map to test accuracy
float Tlat = (float) gMapper.lat2y(40.7111);
float Tlon = (float) gMapper.lon2x(-73.9565);
fill(255, 0, 0);
ellipse(Tlon, Tlat, 20, 20);

Here’s the code:

/*
 headers:
 FlashPortionID,FlashPortionGUID,FlashGUID,Lightning_Time,Lightning_Time_String,Latitude,Longitude,Height,Stroke_Type,Amplitude,Stroke_Solution,Offsets,Confidence,LastModifiedTime,LastModifiedBy
 */
import googlemapper.*;
import peasy.*;

BufferedReader reader;
String line;
ArrayList<Zap> zapList = new ArrayList<Zap>();
PImage map;
PImage img;
String filename = "";
boolean mapExists = false;
int counter = 0;
int savedTime;
int totalTime = 60000;
PeasyCam cam;
GoogleMapper gMapper;

void setup() {
  size(1000, 1000, P3D);
  img = loadImage("texture.png");
  map = loadImage("map.jpg");
  cam = new PeasyCam(this, width/2, height/2, -100, 900);
  cam.setMinimumDistance(100);
  cam.setMaximumDistance(2000);

  double centerLat = 0;
  double centerLon = 0;
  int zoomLevel = 2;
  String mapType = GoogleMapper.MAPTYPE_SATELLITE;
  int mapWidth = 1000;
  int mapHeight = 1000;
  gMapper = new GoogleMapper(centerLat, centerLon, zoomLevel, mapType, mapWidth, mapHeight);

  reader = createReader("lightning.csv");
}

void draw() {
  try {
    line = reader.readLine();
  }
  catch (IOException e) {
    e.printStackTrace();
    line = null;
  }
  if (line == null) {
    // Stop reading because of an error or file is empty
    noLoop();
  }
  else {
    if (random(100) < 10) {
      String[] z = split(line, ',');
      float lat = float(z[5]);
      float lon = float(z[6]);
      float h = float(z[7]);
      float amp = float(z[9]);
      lat = (float) gMapper.lat2y(lat);
      lon = (float) gMapper.lon2x(lon);
      Zap l = new Zap(lat, lon, h, amp);
      zapList.add(l);
      // if (zapList.size() > 3000) {
      //   zapList.remove(0);
      // }
    }
  }

  background(0);
  image(map, 0, 0);
  fill(1, 1, 30, 175);
  noStroke();
  rect(0, 0, width, height);

  for (int i = zapList.size()-1; i >= 0; i--) {
    Zap z = zapList.get(i);
    z.render();
    z.update();
    if (keyPressed == true) {
      if (key == 'h') {
        z.hloc.z = z.h;
      }
      int passedTime = millis() - savedTime;
      if (passedTime > totalTime) {
        counter++;
        saveFrame(counter + ".jpg");
        savedTime = millis();
      }
    }
  }
}

class Zap {
  PVector loc;
  PVector hloc;
  float h;
  float rad;
  float alpha;

  Zap(float _lat, float _lon, float _h, float _amp) {
    h = map(_h, 0, 20000, 10, 100);
    loc = new PVector(_lon, _lat, 0);
    hloc = new PVector(_lon, _lat, 0);
    if (_amp < 0) {
      _amp = -(_amp);
    }
    alpha = map(_amp, 0, 100000, 50, 100);
    rad = map(_amp, 0, 100000, 2, 5);
  }

  void update() {
    loc.lerp(hloc, 0.1);
  }

  void render() {
    pushMatrix();
    translate(loc.x, loc.y, loc.z);
    noStroke();
    fill(255, 215, 110, alpha/2);
    ellipse(0, 0, rad, rad);
    stroke(255, 170, 10, alpha/2);
    strokeWeight(1);
    line(0, 0, 0, 0, 0, h);
    popMatrix();
  }
}

Midterm Proposal Live Web:

For the midterm, I would like to recreate the game Pictionary using HTML5 canvas and processing.js, socket.io chat, and HTML5 video. I think it might be interesting to try to translate an existing game to this medium and assess what is lost, gained, or changed. I know this is similar to Draw Something and iSketch; maybe I can figure out ways to make it better… once I figure out how to make it work initially.

The first user to log in, User[0], will be shown a word to draw, randomly chosen from an array of possible words. User[0] will be the only one able to draw on the canvas and attempts to draw the person, place, or object given.


All other users become players. Their video feed image is captured from the webcam and displayed in the right column, like I did here. These players cannot see the word User[0] is trying to draw, nor can they draw on the canvas. The players enter their guesses in the form, and once submitted, their guess lists are displayed next to their images.


Once User[0] declares a winner (points are awarded?), the screen resets, User[0] becomes a player, and User[1] becomes the artist (like position rotation in volleyball).

Challenges:

I have been playing more with video and chat over sockets in the past few weeks, so I need to do some research on what I can do with canvas to make a cool drawing tool. That being said, the default dirty way of drawing on canvas is kind of funny and childlike, and it makes the game harder.

Resetting the game board and rotating players while retaining the user ID and points variables for each player.