DataOps Engineer Requirements:
- Bachelor's degree in Computer Science, Computer Engineering, or a related technical field; four years of related experience; or an equivalent combination of education and experience
- 2+ years of experience with data streaming technologies such as Kafka
- 2+ years of experience applying ETL (Extract, Transform, and Load) concepts
- Experience querying and designing databases using one or more of the following: MySQL, MS SQL, Oracle SQL, or another professional database system
- Ability to work in teams and collaborate with others to clarify requirements, quickly identify problems, and find creative solutions together
- Ability to assist in documenting requirements and in resolving conflicts or ambiguities
Nice to Have Skills:
- 4 or more years of experience programming in one or more of the following: Java, C++, Perl, Python, or advanced shell scripting
- 3 or more years of experience implementing data-driven solutions using tools such as Hadoop, Impala, Hive, NiFi, Athena, Redshift, Bigtable, or Airflow
- 2 or more years of experience in machine learning and statistical modeling
- Experience with cloud-native tools such as Kubernetes and Docker
- Experience using the R statistical computing language
- Experience with Agile at Scale, SAFe, and Lean Systems Engineering
DataOps Engineer Responsibilities:
- Develop high-volume, low-latency, data-driven solutions utilizing current and next-generation technologies to meet evolving business needs
- Acquire big data input from numerous partners. Key technologies may include Python, Airflow, Prometheus, and Kafka; a minimal ingestion sketch follows this list.
- Normalize complicated data sources to convert potentially unusable data into a format that can be used efficiently by software and/or employees. Key technologies may include Spark, Kinesis, and Lambda; a normalization sketch also follows this list.
- Build a CI/CD pipeline for our data software to keep quality high and time to market low. Key technologies may include GitLab.
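For illustration only, here is a minimal sketch of the data-acquisition step described above: a Kafka consumer written in Python with the kafka-python library. The topic name, broker address, and consumer group are hypothetical placeholders, not references to any actual system.

    # Minimal sketch of a streaming-ingestion consumer using kafka-python.
    # The topic, broker, and group id below are hypothetical examples.
    import json

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "partner-events",                      # hypothetical topic name
        bootstrap_servers=["localhost:9092"],  # hypothetical broker address
        group_id="dataops-ingest",             # hypothetical consumer group
        auto_offset_reset="earliest",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    for message in consumer:
        record = message.value
        # A real pipeline would land each record in a staging store
        # (object storage, a queue, or a warehouse table) for later processing.
        print(message.topic, message.partition, message.offset, record)

In practice, an orchestrator such as Airflow would typically schedule and monitor jobs like this one, with Prometheus collecting throughput and lag metrics.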
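Along the same lines, a hedged sketch of the normalization step using PySpark; the input path, column names, and output location are assumptions made purely for illustration.

    # Minimal PySpark sketch: turn a messy partner feed into a clean, typed table.
    # Paths and column names are hypothetical examples.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("normalize-partner-feed").getOrCreate()

    raw = spark.read.json("s3://example-bucket/raw/partner_feed/")  # hypothetical source

    clean = (
        raw
        .withColumn("event_time", F.to_timestamp("event_time"))    # string -> timestamp
        .withColumn("amount", F.col("amount").cast("double"))      # enforce a numeric type
        .withColumn("customer_id", F.trim(F.col("customer_id")))   # strip stray whitespace
        .dropDuplicates(["event_id"])                               # de-duplicate on a key
        .na.drop(subset=["event_id", "event_time"])                 # drop unusable rows
    )

    clean.write.mode("overwrite").parquet("s3://example-bucket/curated/partner_feed/")  # hypothetical target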
Department: Preferred Vendors
This is a contract position