Job Responsibilities: Data Engineer
Salary: $20-30/Hour
Company: Costco
Location: USA
Educational Requirements: Bachelor's Degree
Full Job Description:
It's an environment unlike anything else in the high-tech world, and the secret to Costco's success is its culture. The value Costco places on its employees has been well documented in reports from various publishers, including Bloomberg and Forbes. Our employees and members come first. Costco is well known for its generosity and community service and has received numerous awards for its charitable work. The company joins its employees in volunteer work by providing many opportunities to help others.
Come join the Costco Wholesale IT family. Costco IT is a dynamic, fast-paced environment driven by exciting transformation efforts. We are building the next generation of retail technology, where you will be surrounded by dedicated employees and top professionals. The Data Engineer is responsible for developing data pipelines and/or data integrations that supply test data used across Costco companies for business-critical purposes (i.e., reporting, data science/machine learning, APIs, etc.). At Costco, we are in the business of using valuable data to deliver better products and services to our members. This role focuses on data engineering to create and deliver an automated data pipeline that supplies stored data for all testing purposes. The Data Engineer will work with product owners, data architects, and data platform teams to design, build, test, and maintain a data pipeline that is used across the enterprise as a single source of truth. If you want to be part of one of the best companies in the world to work for, apply today.
RESPONSIBILITIES
● Develops and implements data pipelines that load data into the various test environments used for the development of our supported datasets.
● Works in tandem with data designers, data managers, and data quality engineers to design data pipelines and recommend improvements to data storage, data integration, data quality, and organizational data structure.
● Applies methods and tools to mask PII and other sensitive data (a minimal masking sketch follows this list).
● Designs, develops, and implements ETL/ELT/CDC processes using Informatica Intelligent Cloud Services (IICS), Azure Data Factory, AWS Glue, and other ETL products.
● Uses Azure services such as Azure Synapse (formerly Azure SQL DW), ADLS, Azure Event Hub, Cosmos DB, Databricks, and Delta Lake to improve and accelerate the delivery of our data products and services.
● Implements Big Data and NoSQL solutions by creating scalable data processing systems that generate valuable information for the organization.
● Identifies, designs, and implements internal process improvements: automating manual processes and optimizing test data delivery.
● Identifies ways to improve the reliability, efficiency, and quality of data management.
● Communicates effectively with non-technical audiences in both written and verbal form.
● Conducts peer reviews of other data engineers' work.
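The masking responsibility above is the most distinctive item in this list, so here is a minimal sketch of what such a step can look like, assuming a PySpark/Databricks environment with Delta Lake available; the file paths and column names (member_id, email, phone) are hypothetical, not Costco's actual schema:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("test-data-masking").getOrCreate()

# Read a raw extract (hypothetical path and columns).
raw = spark.read.option("header", True).csv("/mnt/raw/members.csv")

# Mask PII before the data reaches any test environment.
masked = (
    raw
    # Irreversible hash keeps the identifier joinable across datasets.
    .withColumn("member_id", F.sha2(F.col("member_id"), 256))
    # Constant redaction where no referential integrity is needed.
    .withColumn("email", F.lit("redacted@example.com"))
    # Pattern masking preserves the shape of the value.
    .withColumn("phone", F.regexp_replace("phone", r"\d", "X"))
)

# Persist as a Delta table so every test environment reads one source of truth.
masked.write.format("delta").mode("overwrite").save("/mnt/test/members_masked")
```

Hashing keeps joins between masked test datasets consistent while making the original identifier unrecoverable; constant redaction suits fields where no such consistency is required.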
REQUIRED QUALIFICATIONS
● 5+ years of experience engineering and implementing data pipelines with large and complex datasets.
● 2+ years of hands-on experience with Informatica IICS, Azure Data Factory, AWS Glue or other ETL tools.
● 3+ years of experience working with cloud technologies such as ADLS, Azure Databricks, Spark, Azure Synapse, Cosmos DB and other Big Data technologies.
● Extensive experience working with various data sources (DB2, SQL, Oracle, flat files (CSV, delimited), APIs, XML, JSON).
● Experience implementing data integration patterns such as event/message-based ingestion (Kafka, Azure Event Hub) and ETL; a streaming ingestion sketch follows this list.
● Advanced knowledge of SQL; strong understanding of relational databases and enterprise data; ability to write complex SQL queries against various data sources (see the example after this list).
● 5+ years of experience with data pipelines, ETL, and data warehousing.
● Good understanding of database security concepts across data stores (data lakes, relational databases, NoSQL, graph databases).
● Git/Azure DevOps experience.
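For the event-based ingestion qualification, the sketch below shows one common pattern: Spark Structured Streaming reading from a Kafka-compatible endpoint (Azure Event Hubs exposes one). The broker address, topic, and storage paths are hypothetical, authentication options are omitted, and running it requires the spark-sql-kafka connector package on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event-ingest").getOrCreate()

# Subscribe to a topic on a Kafka-compatible broker (hypothetical endpoint).
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker.example.com:9093")
    .option("subscribe", "member-events")
    .load()
)

# Kafka delivers key/value as binary; cast to string for downstream parsing.
parsed = events.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("value"),
    "timestamp",
)

# Append micro-batches to a Delta table; the checkpoint makes restarts safe.
query = (
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/chk/member-events")
    .start("/mnt/test/member_events")
)
query.awaitTermination()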
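And for the complex-SQL qualification, a small self-contained illustration of the kind of analytical query involved, run here through Spark SQL against hypothetical sample data:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-example").getOrCreate()

# Hypothetical sample data standing in for an enterprise table.
spark.createDataFrame(
    [("west", "A1", 120.0), ("west", "B2", 300.0), ("east", "A1", 90.0)],
    ["region", "item_id", "revenue"],
).createOrReplaceTempView("warehouse_sales")

# Rank items by revenue within each region using a window function.
spark.sql("""
    SELECT region, item_id, revenue,
           RANK() OVER (PARTITION BY region ORDER BY revenue DESC) AS revenue_rank
    FROM warehouse_sales
""").show()
```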
