
Data Engineer with Python and Databricks || Louisville, KY || Onsite
Email: [email protected]
http://bit.ly/4ey8w48
https://jobs.nvoids.com/job_details.jsp?id=1975685&uid=

From:

Sona Chhabra,

SMART IT FRAME

[email protected]

Reply to:   [email protected]

Job Title: Data Engineer with Python and Databricks

Location: Louisville, KY

Job Description

Responsibilities:

Data Pipeline Development: Design, implement, and optimize data pipelines using Python, Databricks, and other modern data engineering tools.

Data Modeling: Work on designing and developing data models for both structured and unstructured data sources to support various business intelligence (BI) and machine learning (ML) applications.

ETL Process: Build and maintain scalable ETL (Extract, Transform, Load) processes in Databricks, integrating data from multiple sources such as APIs, databases, and cloud storage platforms.

Automation: Automate data workflows, ensuring data consistency, quality, and integrity across all systems.

Collaboration: Partner with data scientists and analysts to understand their data needs and provide actionable insights.

Performance Optimization: Identify and resolve performance bottlenecks in data pipelines and queries, ensuring efficient data processing at scale.

Monitoring & Troubleshooting: Implement robust monitoring solutions and resolve data-related issues to ensure the reliability of data processes.

Cloud Infrastructure: Leverage cloud platforms (AWS, Azure, GCP) to deploy and manage data engineering solutions.

Required Skills & Experience:

Programming: Proficiency in Python, with a strong understanding of Python libraries such as Pandas, NumPy, and PySpark.

Databricks: Hands-on experience with Databricks, including using Databricks notebooks, building and managing clusters, and working with Spark-based workflows.

Data Engineering: Solid understanding of data engineering concepts, including ETL processes, data warehousing, and building scalable data pipelines.

Cloud Platforms: Experience with cloud-based data platforms such as AWS, Azure, or GCP, particularly related to storage (e.g., S3, Blob Storage) and computing services.

SQL: Advanced SQL skills for querying relational and non-relational databases.

Data Warehousing: Familiarity with data warehousing solutions like Snowflake, Redshift, or BigQuery.

Version Control: Experience with Git for version control and collaborative development.

Problem-Solving: Strong analytical and problem-solving abilities with attention to detail.

Posted: 12:11 AM, 03-Dec-24



