I haven’t been able to work much for the past week due to illness and other responsibilities, but I managed to put in a nice 6hr chunk of time yesterday and today. As a result, I’ve hooked up live Reddit data from r/news, established basic story retrieval with the Node API, modified the front-end to accept the data, and created a rudimentary search setup with Elasticsearch. A nice uninterrupted 10pm-4am session can sometimes be so much better than several short bursts in the daytime!

I estimate this chunk completed maybe 10% of the front-end and 20% of the web server. I’ll do the alpha ETA calculation later. Here are some shots with live data:

The new story detail page with populated data. I need to clean up the keyword extraction and images.


The front page. The Reddit data mining runs periodically, so there can be live updates to the site. (Server push coming soon.)
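The mining step essentially boils down to mapping Reddit’s listing JSON into story records. A rough sketch of that transform (the field names are illustrative, not my actual schema):

```javascript
// Map a Reddit listing (the shape returned by e.g. /r/news/hot.json)
// into minimal story records. Field names here are illustrative.
function listingToStories(listing) {
  return listing.data.children.map(({ data }) => ({
    id: data.id,
    title: data.title,
    url: data.url,
    score: data.score,
    fetchedAt: Date.now(),
  }));
}

// A sample object shaped like Reddit's listing response.
const sample = {
  kind: 'Listing',
  data: {
    children: [
      {
        kind: 't3',
        data: {
          id: 'abc123',
          title: 'Example headline',
          url: 'https://example.com/a',
          score: 42,
        },
      },
    ],
  },
};

const stories = listingToStories(sample);
```

Running this on a timer (and later via server push) is what keeps the front page live.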



Elasticsearch powering the search page was remarkably easier to set up than my previous approach of building from scratch with Lucene.
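The search itself is little more than building a query body and handing it to the Elasticsearch client. Something like this (the index and field names are assumptions for illustration, not my exact mapping):

```javascript
// Build a full-text search request body for Elasticsearch.
// Index/field names ("title", "summary") are assumed for illustration.
function buildSearchBody(query, size = 10) {
  return {
    size,
    query: {
      multi_match: {
        query,
        fields: ['title^2', 'summary'], // boost title matches over summary
      },
    },
  };
}

const body = buildSearchBody('election');
// Hand `body` to the ES client, e.g. client.search({ index: 'stories', body })
```

The `multi_match` query with a `^2` boost on titles gives sensible relevance out of the box, which is exactly the part that was painful to hand-roll with Lucene.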


Backend Prep

I’ve started building the Node back-end, and already I’m beginning to see why others warn against “callback hell.” I’ve picked up bluebird for Promises, which should help tame the nested callbacks. Mocha and Chai are looking good as options for testing the REST API once I have the requirements stabilized, and may give me more opportunity for test-driven development.

I’ve currently worked 19hrs on this project, with an estimated 21% completion of the front-end. That projects to a 90hr front-end build time with 71hrs to go (compared to the 56hr estimate from the previous post). I’ve been working on other projects, so I haven’t been able to put in enough time to meet my 2-week stretch deadline.

However, I’ve set up the Express and Flask APIs and am currently using them to retrieve placeholder data for the story pages. Here’s what the story page looks like so far:


I’m using Newspaper to generate all of the information so far, with its built-in NLP capabilities to extract keywords and summarize the text. There was never really much of a design to begin with, so I’m thinking about how to display the statistics and sentiment analysis information when I get to it.

The summary also won’t be a huge block of text in the final version, but a collection of helpful snippets from multiple news sources. On the right sidebar I’ve added an area for “Related,” which may imply some content recommendation in the future. I have no click data, so recommendation will most likely be NLP-similarity based (or maybe index news sites’ own recommendations?).
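Without click data, “Related” could be ranked by keyword overlap between stories. A sketch using Jaccard similarity over the extracted keywords (function names are mine, not working project code):

```javascript
// Jaccard similarity between two keyword lists:
// |intersection| / |union|, in [0, 1].
function jaccard(a, b) {
  const setA = new Set(a);
  const setB = new Set(b);
  const intersection = [...setA].filter((k) => setB.has(k)).length;
  const union = new Set([...setA, ...setB]).size;
  return union === 0 ? 0 : intersection / union;
}

// Rank candidate stories by keyword overlap with the current story.
function related(story, candidates, limit = 3) {
  return candidates
    .map((c) => ({ story: c, score: jaccard(story.keywords, c.keywords) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, limit)
    .map((r) => r.story);
}
```

It’s crude compared to proper document similarity, but it falls straight out of the keywords Newspaper already extracts, so it’s a cheap baseline.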

Basic Structure and Styling

I’m trying out a different approach with this project. Most of the time I build the back-end first, with functional APIs and real working data before beginning the UI. As a result, changes in the requirements of the UI sometimes make it necessary to modify the back-end for overlooked features. So this time, I want to try to build a semi-static UI with dummy data, then see what data requirements can minimally fulfill the UI.

I’ve been a little busy these past few days with other projects, so I’ve only been able to log 4.5 hours so far. I’ve marked an approximate 8% completion for the minimal front-end code so far, which gives a 56.25hr total front-end estimate (51.75hr remaining). This estimate will become more accurate over time, but seeing that this placeholder UI is essentially the wireframe and requirements list for the project, these numbers might be useless for now!
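For reference, the burn-up arithmetic behind these numbers is just hours logged divided by fraction complete:

```javascript
// Project total build time from hours logged and estimated completion.
// fractionComplete is a self-assessed guess, so take the output loosely.
function projectHours(hoursLogged, fractionComplete) {
  const total = hoursLogged / fractionComplete;
  return { total, remaining: total - hoursLogged };
}

const est = projectHours(4.5, 0.08); // total ≈ 56.25 hr, remaining ≈ 51.75 hr
```

The completion percentage is the weak link here; the division only becomes meaningful once that guess stabilizes.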

Here’s how it looks so far:


I’m using Unsplash for the placeholder images, which is nice since they’re random (refreshing!). It looks like Product Hunt, well, because my main design inspiration for now is Product Hunt. I’ll worry about branding and individuality later after I get to the fun NLP parts of the project.

I’ve also set up a basic Express server, which I’ll populate with placeholder data as its outputs. That way I can serve the placeholders through the REST API and have a halfway-functional front-end by the end of this initial phase.

I thought 3D scanning would be extremely painful to do without fancy equipment or complicated setups. However, Autodesk has a free (at least free for students) service called 123D Catch, which lets you upload pictures and have them processed into a 3D model in the cloud. Simply amazing. I just learned of this software today and did a low-quality test with the quadrotor on my desk:

Considering it was done with just 20 unfocused, low-resolution photos that didn’t cover the entire span of the model, this 3D scan is amazing. I wish I had access to the source code.

There are many applications for this. It’s a 3D printer’s dream to be able to record something in the field and have a replica printed a few hours later without ever touching or measuring the reference object. The capture could be done by scouting with a quadrotor or drone, by hand, or in a studio. The drone 3D model capture idea sounds great, though. I’ll take some aerial photos with the big quad later and try to make a 3D model from them. Another cool idea is taking underwater video and stitching that. More things to come.