Requirements: Bachelor's Degree in Information Technology, Computer Science, or a related field of study, plus 6 years of professional experience in data integration and pipeline development, or related experience.

Required knowledge or experience with:
Databricks, Delta Lake, Oracle, SQL Server, or AWS Redshift-type relational databases;
ETL (data extraction, data transformation, and data load) processes;
Athena;
AWS Cloud data integration with Apache Spark, Glue, Kafka, Elasticsearch, Lambda, S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems;
Python development, including PySpark, in an AWS Cloud environment;
Python and common Python libraries;
Analytical database experience writing complex queries, query optimization, debugging, user-defined functions, views, and indexes;
Source control systems such as Git, and Jenkins build and continuous integration tools;
Development methodology and experience writing functional and technical design specifications;
A technical, development background in either Data Services or Engineering;
Resolving complex data integration problems; and
Working cross-functionally.

Employer will accept any suitable combination of education, training, or experience.

Categories: eb3
