Narrative is building the first global data marketplace. It has often been said that data is the new currency.
Unfortunately, maximizing the value of data is often easier said than done. On one side, transacting via individual
point-to-point integrations carries significant overhead in both business development and technical integration. On
the other, going through big aggregators introduces opacity into the pricing and provenance of the data. At Narrative,
we help our customers get value from their data by building a central auction platform that reduces friction, along
with tooling that increases transparency throughout the process.
We are a small, early-stage team looking for great developers who want to jump in and take major systems and user-facing
features from design to launch. Here’s where we are now:
- We operate in Amazon Web Services. Our services are mainly deployed on EC2 and provisioned with Terraform.
- We also make heavy use of other AWS services such as DynamoDB, S3, and RDS.
- Our backend includes a data ingestion web service with supporting Kinesis consumers, along with a growing array of
Spark projects. It’s written mostly in Scala, with a smattering of Python for Lambda functions.
- We sit somewhere in the middle of the spectrum between “Scala as a worse Haskell” and “Scala as a better Java.” We
love functional programming and make use of libraries like cats, but at the same time we heavily favor core language
features and have no intention of rewriting everything using Free Monads.
- Our web app UI is written in TypeScript with Angular 2 and a supporting API running on Node, and is deployed and
monitored using much the same supporting tech as the backend.
- Other services we use include: GitHub, CircleCI, DataDog.
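To give a flavor of the pragmatic functional style described above, here is a small, purely illustrative Scala sketch (not actual Narrative code; the record format and names are hypothetical). It leans on plain case classes and `Either` for error handling, pulling in cats only for a combinator like `traverse` rather than a Free Monad architecture:

```scala
import cats.syntax.all._

// Hypothetical record type, for illustration only.
final case class Event(id: String, bytes: Long)

// Parse one "id,size" line into an Event, reporting failures as Left.
def parse(raw: String): Either[String, Event] =
  raw.split(',') match {
    case Array(id, size) =>
      size.toLongOption.toRight(s"bad size in: $raw").map(Event(id, _))
    case _ => Left(s"malformed record: $raw")
  }

// cats' traverse turns List[Either[...]] inside-out and
// short-circuits on the first failure -- no Free Monads required.
def parseAll(lines: List[String]): Either[String, List[Event]] =
  lines.traverse(parse)
```

The point is the balance: core language features (pattern matching, `Either`, case classes) do most of the work, and cats fills in the combinators the standard library lacks.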
Here are some examples of projects that we would like some help with:
- Data analytics pipelines to drive the interactive report UIs. For example: reports for yield forecasting and deal
- Implementing additional infrastructure to support transactions for more types of data.
- Improving the latency and resource usage of our transaction processes.
The ideal candidate should:
- Have experience in a typed functional language such as Scala or F#, or significant experience in their non-functional
equivalents (Java, C#) with an interest in Scala.
- Have experience working with non-trivial quantities of data. As of this writing, our ingestion pipelines are handling
on the order of 500 GB of .snappy.parquet files per day. Prior work with Spark would be ideal, but experience with
similar MapReduce-based technologies would also be helpful.
- Have experience operating in a cloud environment like Amazon Web Services, Google Compute Engine, or similar.
- Be able to work across all aspects of backend systems, from application code to SQL to systems administration.
- Not be afraid of contributing to the entire stack (from the UI to DevOps) when the need arises.
We are not looking for a 100% fit on all the technology buzzwords, but we are looking for someone with strong technical
skills who is eager to pick up new technologies as necessary.
We are building the team with a remote-first mindset; as a result, every team member is expected to be able to
synthesize business requirements, distill the domain, contribute to high-level design documents, communicate
efficiently asynchronously, and more generally work autonomously toward a shared vision.
Continuously investing in quality (code quality, tests, pull request reviews, refactoring…) is part of our strategy to
sustainably maximize the business value we deliver.
Apply at firstname.lastname@example.org