Nice to meet you! I'm often regarded as one of the geekiest recruiters in the industry, and honestly, I couldn't be more proud of that. I've loved computers since my Texas Instruments TI-99/4A in 1981, followed by a Laser 128, an Apple II clone manufactured by VTech. I introduced Apple Logo (a Lisp-like language) to my elementary school and helped teach after-school classes on the Apple II.
Fast forward three decades and I'm still playing with code, deploying systems on AWS EC2/S3, and for the past 16+ years I've helped build some of the most prestigious enterprise and consumer software companies in the world.
You can expect that I will never spam your resume anywhere. I'm incredibly technical, with a deep understanding of the technology and trends that make our industry go around, and I can connect you with a wealth of opportunities you're unlikely to find elsewhere, since the best roles are rarely advertised.
• Building v1 greenfield solutions that have established our company as the world's leading Data Warehouse as a Service, Advanced Analytics as a Service, and Data Lake as a Service provider. We've assembled a leading team of technologists who leverage technologies such as Docker, Kubernetes, Mesosphere, AWS/OpenStack, HashiCorp tools (Vagrant, Packer), and Ansible in one of the largest deployments in the world.
• We’re building the world’s most scalable database and analytics infrastructure, purpose-built for both public and private cloud deployments and based upon a massively parallel architecture with near-endless elasticity. In fact, our largest deployment sits within the SUPERNAP data center (https://www.supernap.com) in Las Vegas and has a footprint equivalent to an entire football field.
• A truly innovative technology infrastructure company, building the most cutting-edge Big Data technology. We’ve filed 397 patents in the last 9 years, which amounts to nearly one new patent weekly. In the last 2 years we’ve replaced competitor infrastructure (Oracle, AWS, IBM, and Netezza) at more than 100 companies, and these are significant global deployments.
• In terms of scale, we have production deployments of more than 50 petabytes for some of the largest Fortune 100 companies in the world, and one customer generates more than 20 terabytes of data per hour. Our customers include eBay, Wells Fargo, Ford, Cisco, Volvo, Boeing, and Comcast, along with:
- 19 of the top 20 telecommunications companies.
- 14 of the top 20 global retailers, including customers with deployments of more than 10 petabytes.
- 18 of the top 20 financial institutions.
- 6 of the 6 top airline transportation companies.
• Incredibly stable and profitable, with a global presence of more than 40 offices.
• Teradata is a publicly traded company with $2.18 billion in annual revenue, $250 million in profit, and $1.08 billion cash in the bank. In fact, you can combine the revenue of Hortonworks ($217M), Cloudera ($309M), and Tableau ($869M) and it still doesn't stack up to Teradata's ($2.18B).
Interested? Please don’t hesitate to get in touch.
Does the prospect of solving the world’s toughest business problems with data and analytics excite you? Would you like to work with a team of the brightest analytical and engineering minds in the industry to understand and advance emerging technologies to invent the next wave of cutting-edge analytic data solutions?
We are building v1 greenfield solutions that have established our company as the world’s leading Data Warehouse as a Service, Advanced Analytics as a Service, and Data Lake as a Service provider. We’ve assembled a leading team of technologists who leverage technologies such as Docker, Kubernetes, Mesosphere, AWS/OpenStack, HashiCorp tools (Vagrant, Packer), and Ansible in one of the largest deployments in the world.
• Architect and develop systems and processes to collect, transform, store, and enable analysis of structured and unstructured data.
• Develop, test, and deploy new data pipelines to support customer requirements; fix defects and implement enhancements in existing pipelines.
• Partner with IT, platform, application, and support teams on support issues, process issues, bug fixes, and delivery of enhancements to existing and new projects.
• Develop and deploy ETL job workflows with reliable error/exception handling and a rollback framework.
• Develop processes and techniques for practicing good data hygiene to ensure data is always up to date, accurate, and stored efficiently.
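To give a flavor of the ETL error-handling and rollback work described above, here is a minimal, hypothetical sketch (not Teradata's actual pipeline framework): each batch loads inside a single transaction, a data-hygiene check rejects bad records, and any failure rolls the whole batch back so the target table is never left half-loaded. SQLite stands in for the warehouse purely for illustration.

```python
import sqlite3

def run_etl(conn, rows):
    """Load rows into `events` in one transaction; roll back on any failure."""
    cur = conn.cursor()
    cur.execute("BEGIN")
    try:
        for row in rows:
            # Transform step: basic data-hygiene check before loading.
            if row.get("id") is None:
                raise ValueError(f"bad record: {row}")
            cur.execute(
                "INSERT INTO events (id, payload) VALUES (?, ?)",
                (row["id"], row.get("payload", "")),
            )
        cur.execute("COMMIT")
        return True
    except Exception:
        cur.execute("ROLLBACK")  # leave the target table untouched
        return False

# isolation_level=None: manage transactions explicitly via BEGIN/COMMIT.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

ok = run_etl(conn, [{"id": 1, "payload": "a"}, {"id": 2, "payload": "b"}])
bad = run_etl(conn, [{"id": 3, "payload": "c"}, {"id": None}])
count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(ok, bad, count)  # True False 2 -- the failed batch was rolled back
```

The second batch partially inserts row 3 before hitting the bad record, but the rollback discards it, so only the first batch's two rows survive.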
• Expert at aggregating multiple data sources and creating complex SQL queries.
• BS with 8+ years of related experience, or US Master’s with 6+ years of experience.
• Agile (Scrum, Kanban, Lean) and Test-Driven Development.
• Building SOA-based systems a must.
• Building and/or leveraging distributed systems.
• Big Data – Hadoop (HDFS, YARN) and Kafka experience a plus.
• RDBMS (MySQL, PostgreSQL, etc.) and SQL.
• RESTful API development.
• Public Cloud – AWS; Private Cloud – OpenStack, VMware.
• Building and leveraging CI/CD pipelines.
• Build tools – Gradle, Maven, make.
• Continuous Integration solutions such as Jenkins, Travis CI, TeamCity, or Bamboo.
• Bachelor's degree in Computer Science or related field.
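As a small illustration of the "aggregating multiple data sources with complex SQL" skill listed above, the sketch below joins two hypothetical tables (the schema and data are invented for this example) and rolls them up with a GROUP BY, again using SQLite purely as a stand-in engine.

```python
import sqlite3

# Hypothetical schema: customers from one source, orders from another.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER,
                         amount REAL);
    INSERT INTO customers VALUES (1, 'west'), (2, 'east'), (3, 'west');
    INSERT INTO orders VALUES (10, 1, 100.0), (11, 2, 50.0),
                              (12, 3, 25.0), (13, 1, 75.0);
""")

# Aggregate across both sources: total revenue and order count per region.
rows = conn.execute("""
    SELECT c.region, SUM(o.amount) AS revenue, COUNT(*) AS n_orders
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    GROUP BY c.region
    ORDER BY revenue DESC
""").fetchall()
print(rows)  # [('west', 200.0, 3), ('east', 50.0, 1)]
```

The join resolves each order to its customer's region before the GROUP BY collapses the result, which is the usual shape of multi-source aggregation queries.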