Honeywell Hiring Data Engineers | Up to 10 LPA | Apply Now!

Honeywell Data Engineer Job Post

Honeywell is Hiring for the Role of Data Engineer

Company Name: Honeywell

Post Name: Data Engineer

Qualification: Bachelor’s Degree in Computer Science, Software Engineering, or a related discipline

Branch: Any Branch

Batch: Recent Batches

Salary: Up to 10 LPA

Experience: Fresher / Experienced

Job Location: Work From Home (WFH)

Last Date: ASAP

Roles and Responsibilities:

  • Apply strong data engineering principles to ingest data from relevant source systems, transform it, and build the target models in data consumption engagements.
  • Build data models on a robust, scalable, and secure data architecture, using modern data engineering platforms, tools, and technologies and implementing industry best practices.
  • Develop data pipelines leveraging standards, guidelines, and data governance rules.
  • Partner with the IT support team on production processes, continuous improvement, and production deployments.
  • Think creatively to find optimal solutions to complex problems.
  • Ensure that technical solutions are scalable, maintainable, and adhere to industry best practices and standards.
  • Collaborate with cross-functional teams, including product owners, designers, and other stakeholders, to ensure the successful delivery of projects.
  • Conduct code reviews and ensure that code quality is maintained throughout the development process.
  • Contribute to the standards, frameworks, and tools that improve the scalability of the enterprise data warehouse (EDW).
  • Troubleshoot and resolve technical issues that arise during the development process.

Qualifications:

  • 3-6 years of relevant experience with a bachelor’s degree in computer science, software engineering, or a related discipline.
  • Strong advanced SQL skills: Proficient in writing complex queries, joining large datasets, and optimizing queries for performance.
  • ETL experience: Designing and building ETL pipelines to extract data from various sources and expose them to data consumption platforms such as Power BI, Tableau, and Databricks, as well as app-building platforms such as Mendix and Appian.
  • Exposure to relational databases such as PostgreSQL, non-relational (NoSQL) databases, and Big Data systems such as Spark and Hadoop.
  • Programming skills: Proficient in at least one programming language, such as Python, Java, C++, Scala, or Ruby.
  • Experience working with Big Data using tools like Hadoop, Spark, and Kafka.
  • Cloud experience: Experience working with cloud platforms like AWS or Azure, and an understanding of how to optimize Snowflake’s performance within a cloud environment.

How To Apply:

  1. Click on the “Apply Here” button provided below. You will be redirected to the company's official career page.
  2. Click on “Apply Online”.
  3. If you have not registered before, create an account.
  4. After registration, log in and fill in the application form with all the necessary details.
  5. Submit all relevant documents, if requested (e.g., resume, mark sheet, ID proof).
  6. Provide accurate information in your application.
  7. Verify that all the details entered are correct.
  8. Submit the application form after verification.
