Nice to meet you! I'm often regarded as one of the geekiest recruiters in the industry, and to be honest I couldn't be more proud of that. I've loved computers since my Texas Instruments TI-99/4A in 1981, followed by a Laser 128, an Apple II clone manufactured by VTech. I introduced Apple Logo (a language in the Lisp family) to my elementary school and helped teach after-school classes on the Apple II.
Fast forward three decades: I'm still playing with code and deploying systems on AWS EC2/S3, and for the past 16+ years I've helped build some of the most prestigious enterprise and consumer software companies in the world.
You can expect that I will never spam your resume anywhere. I'm deeply technical, with a thorough understanding of the technology and trends that make our industry go around, and I can offer opportunities you're unlikely to find elsewhere, because the best roles are rarely advertised.
We are an industry-leading SaaS startup providing real-time analytics derived from complex querying of massive amounts of data. Our platform turns tens of billions of events and tens of terabytes of data per day into accessible visualizations as they are received.
Forget Hadoop and MapReduce on their own; that's old-school batch processing. We're all about real-time Big Data and analytics. In fact, we've built our own distributed, in-memory data store (which we donated to the open-source community) that can process 150,000 events per second (billions per day), equating to about 500MB/s of data at peak (terabytes per hour), while still maintaining real-time, exploratory querying.
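For a sense of scale, those headline figures hang together arithmetically. A quick back-of-the-envelope check (the constants below are simply the numbers quoted above, using decimal units):

```java
public class ThroughputCheck {
    public static void main(String[] args) {
        long eventsPerSec = 150_000;
        long eventsPerDay = eventsPerSec * 86_400;            // seconds in a day
        double mbPerSec = 500;
        double tbPerHour = mbPerSec * 3_600 / 1_000_000;      // MB/s -> TB/hour (decimal)
        System.out.printf("%,d events/day (~%.1f billion), %.1f TB/hour%n",
                eventsPerDay, eventsPerDay / 1e9, tbPerHour);
        // prints: 12,960,000,000 events/day (~13.0 billion), 1.8 TB/hour
    }
}
```

So 150,000 events per second really is on the order of 13 billion events per day, and 500MB/s sustained is 1.8TB per hour.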
In addition to building a really cool open-source data store, we also work with MV* frameworks (Backbone.js, AngularJS, Ember.js) along with D3.js and Node.js on the UI; Kafka, Storm, and Scala on the platform/data-ingestion tier; and Ruby on the application tier.
Interested? Then get in touch.
We are looking for talented engineers to join our backend team, contributing to a fluid set of distributed systems that power our petabyte-scale data platform. This spans work on our real-time data ingestion layer which combines Kafka, Storm and Hadoop; our Scala-based machine learning modules; and Druid, our open-source, distributed, in-memory data store.
** Apply academic results (such as the HyperLogLog algorithm) to real-world problems in our production environment.
** Focus on functional architectures that are easy to operate, manage, and extend.
** Be open to continuous learning and mentorship from our senior engineers, and make individual contributions that teach us something we don't already know.
** Java, the JVM, Hadoop and its friends (Pig, ZooKeeper).
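Since HyperLogLog comes up above, here's a minimal, illustrative sketch of the core idea in Java: hash each item, use the top bits of the hash to pick a register, and keep the longest run of leading zeros seen in the remaining bits. The register count, the hash (the MurmurHash3 64-bit finalizer), and the bias constant are textbook defaults for illustration, not anything from our production code.

```java
/** Toy HyperLogLog cardinality sketch (illustrative only). */
public class HllDemo {
    static final int B = 14;            // 2^14 = 16384 registers, ~0.8% std. error
    static final int M = 1 << B;
    final byte[] reg = new byte[M];

    // MurmurHash3 64-bit finalizer: a cheap hash with good avalanche behavior
    static long hash(long x) {
        x ^= x >>> 33; x *= 0xff51afd7ed558ccdL;
        x ^= x >>> 33; x *= 0xc4ceb9fe1a85ec53L;
        x ^= x >>> 33; return x;
    }

    void add(long item) {
        long h = hash(item);
        int idx = (int) (h >>> (64 - B));               // top B bits pick a register
        long rest = h << B;                              // remaining 64-B bits
        int rank = Long.numberOfLeadingZeros(rest) + 1;  // position of first 1-bit
        if (rank > reg[idx]) reg[idx] = (byte) rank;     // keep the maximum rank seen
    }

    double estimate() {
        double sum = 0; int zeros = 0;
        for (byte r : reg) { sum += Math.pow(2, -r); if (r == 0) zeros++; }
        double alpha = 0.7213 / (1 + 1.079 / M);         // standard bias correction
        double e = alpha * M * M / sum;
        if (e <= 2.5 * M && zeros > 0)                   // small-range (linear counting) correction
            e = M * Math.log((double) M / zeros);
        return e;
    }

    public static void main(String[] args) {
        HllDemo hll = new HllDemo();
        int n = 1_000_000;
        for (int i = 0; i < n; i++) hll.add(i);
        double est = hll.estimate();
        System.out.printf("true=%d est=%.0f err=%.2f%%%n", n, est, 100 * Math.abs(est - n) / n);
    }
}
```

The appeal in a streaming context is that 16KB of registers estimates the cardinality of millions of distinct values to within about one percent, and two sketches merge by taking the per-register maximum, which is what makes the technique practical at ingestion time.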