
Python Hadoop Data Engineer (Remote, USA)
Email: [email protected]
http://bit.ly/4ey8w48
https://jobs.nvoids.com/job_details.jsp?id=614078&uid=

Job Title: Python Hadoop Data Engineer

Long-Term Contract

Job Description:

Key Responsibilities:

1. Data Pipeline Development: Design, develop, and maintain data pipelines
using Python and Hadoop ecosystem technologies such as HDFS, Hive, Spark, and
Kafka (a minimal sketch of such a pipeline appears after this list).

2. Data Ingestion: Implement data ingestion processes to acquire data from
various sources, both batch and real-time, ensuring data is collected
efficiently and reliably.

3. Data Transformation: Perform data transformation, cleansing, and enrichment
tasks to prepare data for analysis and reporting.

4. Data Quality Assurance: Implement data quality checks and monitoring
mechanisms to ensure data accuracy, completeness, and consistency.

5. Performance Optimization: Optimize data processing and storage to ensure
efficient and scalable data solutions, considering factors such as partitioning
and indexing.

6. Security and Compliance: Ensure data security and compliance with relevant
data privacy regulations, implementing necessary access controls and
encryption.

7. Documentation: Maintain comprehensive documentation of data pipelines,
processes, and best practices for the team.

8. Collaboration: Collaborate with data scientists, analysts, and other
cross-functional teams to understand data requirements and deliver solutions
that meet business needs.

9. Monitoring and Troubleshooting: Implement monitoring and alerting solutions
to proactively identify and address data pipeline issues.

10. Continuous Improvement: Stay up-to-date with industry trends and best
practices in data engineering, suggesting improvements to existing processes
and technologies.
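
For illustration only (not part of the posting itself): a minimal sketch of the
kind of pipeline responsibilities 1-3 and 5 describe, using PySpark Structured
Streaming to read a Kafka topic, apply a basic quality filter, and write
partitioned Parquet to HDFS. The broker address, topic name, schema, and paths
are hypothetical placeholders, not details from this role.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("event-ingest").getOrCreate()

# Hypothetical event schema; a real pipeline would derive this from the
# upstream data contract.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("category", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
    # Kafka delivers raw bytes; parse the JSON payload into typed columns.
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
    # Simple quality check: drop malformed or non-positive records.
    .filter(col("amount") > 0)
)

(
    events.writeStream
    .format("parquet")
    .partitionBy("category")  # partitioning for efficient downstream reads
    .option("path", "hdfs:///warehouse/events")          # placeholder path
    .option("checkpointLocation", "hdfs:///chk/events")  # placeholder dir
    .start()
    .awaitTermination()
)
```

Running a sketch like this requires the Spark-Kafka connector on the classpath
(e.g., submitted with --packages org.apache.spark:spark-sql-kafka-0-10_2.12).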

Qualifications:

- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a data engineer, with a focus on Hadoop and Python technologies.
- Strong programming skills in Python and experience with Hadoop ecosystem tools such as HDFS, Hive, Spark, and Kafka.
- Proficiency in SQL and database management systems.
- Knowledge of data modeling and ETL (Extract, Transform, Load) processes.
- Familiarity with data warehousing concepts and cloud platforms (e.g., AWS, Azure, GCP) is a plus.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork skills.
- Ability to work independently and meet project deadlines.

Thanks & Regards,

Maddula Venkateshwara Reddy | Vuesol Technologies Inc.

Senior US IT Recruiter

Email: [email protected]


