Project lead
Job Summary
- Location: Sofia
- Project role: Project lead
- Skills: Azure Databricks
- Secondary skills: Python, SQL, DevOps
- No. of positions: 1
Job description:
Key Responsibilities:
1. Python/PySpark Expertise:
- Proficiency in Python/PySpark frameworks and libraries for data processing and analysis.
2. Azure Databricks (ADB) Expertise:
- Strong knowledge of and hands-on experience with Azure Databricks for data engineering tasks.
3. SQL Skills:
- Expertise in SQL for querying and manipulating data in relational databases.
4. ETL and Process Flow Design:
- Design and implement complex ETL (Extract, Transform, Load) processes and workflows to ensure efficient data movement and transformation.
5. Data Transformation Frameworks:
- Proficiency across various data transformation frameworks to handle diverse data processing requirements.
6. Understanding of Underlying Systems:
- Familiarity with the underlying storage, processing, and visualization technologies in order to optimize data workflows.
7. Airflow Configuration and Orchestration:
- Configure and orchestrate data pipelines using Apache Airflow for workflow automation and scheduling.
Good to Have:
1. Azure Data Factory (ADF) Expertise:
- Experience with Azure Data Factory for building, orchestrating, and monitoring data pipelines.
2. Database Execution Plans:
- Ability to read and understand database execution plans in order to optimize query performance.
3. Azure/Azure DevOps Knowledge:
- Familiarity with Azure cloud services and Azure DevOps for managing and deploying data solutions.
4. Configuration Management, CI/CD, and Cloud Implementations on Azure:
- Understanding of configuration management practices, continuous integration/continuous deployment (CI/CD) pipelines, and cloud implementations on the Azure platform.
5. Azure Certification:
- Azure certification is a plus, demonstrating proficiency in Azure cloud technologies and services.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or related field.
- Proven experience as a Data Engineer or similar role.
- Strong proficiency in Python/PySpark, SQL, and Azure Data Factory.
- Experience designing and implementing ETL processes and data pipelines.
- Familiarity with data transformation frameworks and workflow orchestration tools like Apache Airflow.
- Understanding of database systems, cloud platforms, and CI/CD practices.
- Azure certification is a plus.
Why Join Us:
At HCL Technologies, we offer varied opportunities to learn, do innovative and interesting work, and fast-track your career. You will have access to global clients and work alongside some of the most experienced leaders in the tech industry, who will help you get on the fast track to a rewarding tech career. HCL Technologies offers the leadership, flexibility, and people-friendly work culture to enrich the lives of our employees and those around them, helping everyone find greater stability, purpose, and growth both personally and professionally.
We encourage our people to try things differently, giving them the freedom to explore, opportunities to grow, and room to experiment, while offering intensive training and on-the-job coaching. Our people take pride in their enthusiasm and commitment to go beyond, and we celebrate this collective pride. Below are a few of the benefits and perks:
- Global careers and mobility
- A flexible (remote) working environment with work-life balance
- Great opportunities to make the role your own, upskill yourself and get involved with exciting projects
- Total wellbeing is our focus. Alongside your professional work, you can join like-minded colleagues to create a larger impact within the company and society at large in your chosen area of passion - the CSR Council, the Diversity Council, and Sparks (Engagement Champions), to name a few