The Data Engineering team is responsible for the brains & blood of the platform, spanning our Private Exchange, Data Science Factory, Asset Ingest, and Blockchain services. Engineers on this team will design & build the modules, components, systems, and agent-based algorithms that bring these critical services to life. They are full-stack developers who build scalable, diagnosable, data-driven, and evolvable components that drive our core execution workflows. This requires a unique engineering & data-science mindset that balances creativity, conceptual ability, and practicality with a strong sense of craftsmanship.
In 2018, our roadmap is to deliver Alpha & Beta products to early adopters in the asset management industry; this will not be a "9 to 5" opportunity. Candidates should be prepared for a wild but fun ride and bring their toolboxes filled with amazing tools.
* Design, code, instrument and maintain our data pipeline architecture.
* Assemble large, complex datasets that meet the functional & non-functional requirements of our core execution workflows & user interfaces.
* Build analytic tools that use the digital exhaust from our core execution pipeline to provide actionable insights that grow & optimize our business.
* Collaborate with product management, UX/UI, and operational teams to produce high-quality software using common approaches and standards.
* Live and contribute to our engineering culture.
* Design and build advanced automated testing frameworks and tooling as necessary to improve development velocity & quality.
* Manage your own time, and work well both independently and as part of a team. Launch, iterate and make a difference.
SKILLS & QUALIFICATIONS
* Minimum Qualifications
-> Advanced working SQL knowledge and experience working with relational databases.
-> Experience designing & building 'big data' architectures, pipelines, and datasets.
-> Strong analytical skills related to working with unstructured data.
-> Successful history of manipulating, processing, and extracting value from large disconnected datasets.
-> Proven experience building and enhancing commercial SaaS applications.
-> Experience in cloud engineering, microservices, and CI/CD.
-> 3+ years of coding experience in one or more of the following languages: C/C++, Java, or Python.
-> BA/BS Degree in Computer Science or related technical field.
* Preferred Qualifications
-> Excellent coding skills in C/C++, Java, or Python.
-> Experience with AWS data services (EC2, EMR, RDS, etc.).
-> Experience with stream-processing systems (Storm, Spark, etc.).
-> Experience with big data services and machine learning frameworks (Kinesis, TensorFlow, etc.).
-> Experience with numerical computing environments such as Python, R, or MATLAB.
-> Ability to communicate with all levels of users (internal and external).
-> Willingness to work in Connecticut.