Data Engineer Job at Merkle Science
About Merkle Science
Merkle Science provides blockchain transaction monitoring and intelligence solutions for web3 companies, digital asset service providers, financial institutions, law enforcement, and government agencies to detect, investigate, and prevent illicit use of cryptocurrencies. Our vision is to make cryptocurrencies safe by providing the infrastructure for their safe and compliant growth.
Merkle Science is headquartered in New York with offices in Singapore, Bangalore and London. The team has combined experience across Bank of America, PayPal, Luno, Thomson Reuters and Amazon. The company has raised over US$27 million from SIG, Beco, Republic, DCG, Kenetic, GGV and several others.
What will you do?
- Create and maintain optimal data pipeline architecture for our workloads, including highly resilient architecture for both streaming and batch ETL processes (see the Airflow sketch after this list).
- Develop a good understanding of the data structures that public blockchains use to store data.
- A significant portion of our data pipelines parse blockchain data and store it in our data warehouses.
- Assemble large, complex data sets that meet functional and non-functional business requirements. This includes expanding the scope of our data-mining efforts by building pipelines that crawl data from the dark web, the open web, and third-party data sources.
- Collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
- Work closely with a team of frontend and backend engineers, product managers, and analysts to surface data in our products.
- Implement algorithms to transform raw data into useful information
- Build, manage, and deploy AI/ML workflows.
- Lead technical architecture, design and best practices for the team
- Establish operability as a design principle and follow it in practice
- Apply site reliability concepts such as RCAs, postmortems, runbooks, and related automation
- Follow strong DevSecOps principles
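To ground the streaming/batch ETL responsibilities above, here is a minimal sketch of an hourly batch ETL pipeline expressed as an Apache Airflow DAG (Airflow appears in the requirements below). Everything here is a hypothetical illustration, not Merkle Science's actual pipeline: the DAG id, task names, and placeholder callables are invented, and the `schedule` argument assumes Airflow 2.4+.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_blocks(**context):
    """Placeholder: fetch raw blocks from a node's RPC endpoint."""
    ...


def transform_transactions(**context):
    """Placeholder: parse block payloads into normalized transaction rows."""
    ...


def load_warehouse(**context):
    """Placeholder: append the normalized rows to the data warehouse."""
    ...


with DAG(
    dag_id="blockchain_batch_etl",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",              # pick up newly mined blocks each hour
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_blocks", python_callable=extract_blocks)
    transform = PythonOperator(task_id="transform_transactions", python_callable=transform_transactions)
    load = PythonOperator(task_id="load_warehouse", python_callable=load_warehouse)

    # Linear dependency chain: extract -> transform -> load.
    extract >> transform >> load
```

Retries, alerting, and backfills would be layered on top in a production setting; the point is only the shape of a resilient batch ETL DAG.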
What are we looking for?
- 6+ years of relevant experience as a Big Data Engineer.
- Experience in building data pipelines (ETL/ELT) using open-source tools such as Apache Airflow, Apache Beam, and Spark.
- Experience in building real-time streaming pipelines using Kafka or Pub/Sub (see the consumer sketch after this list).
- Experience in building and maintaining OLAP and OLTP data warehouses.
- Good understanding of Python and Bash scripting, plus basic cloud platform skills (GCP or AWS).
- A working knowledge of Docker is a plus.
- Problem-solving aptitude
- An analytical mind with business acumen
- Excellent communication skills
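For the real-time streaming requirement, here is a minimal consumer sketch, assuming Kafka and the kafka-python client; the topic name, consumer group, and broker address are hypothetical placeholders, not a prescribed design.

```python
import json

from kafka import KafkaConsumer

# Subscribe to a hypothetical topic of raw transaction events.
consumer = KafkaConsumer(
    "raw-transactions",
    bootstrap_servers=["localhost:9092"],  # placeholder broker address
    group_id="tx-enrichment",              # placeholder consumer group
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
)

# The loop blocks and yields records as they arrive on the topic.
for message in consumer:
    tx = message.value
    # Screening/enrichment logic would go here, e.g. checking addresses
    # against a risk list before writing results downstream.
    print(tx.get("hash"), tx.get("value"))
```

A Pub/Sub variant would swap the consumer for a google-cloud-pubsub subscriber; the streaming shape (subscribe, deserialize, process) stays the same.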
❤️ Well-Being, Compensation and Benefits
We care about your well-being. Along with excellent health insurance, we offer flexible time off, learning & development initiatives and hours that are designed to provide work/life balance. We regularly host team-building sessions and encourage discussions around mental health.
We reward talent and believe in acknowledging people for their contributions. We offer industry-leading compensation, along with generous equity. As a rapidly growing business, there are endless opportunities to grow your career with Merkle Science.