About Mako:
Founded in 2013, Mako IT Lab is a global software development company with a strong presence across the USA, UK, India, and Nepal. Over the years, we’ve partnered with companies around the world, helping them solve complex challenges and build meaningful digital experiences.
What truly defines Mako is our culture. We believe in creating an environment where people feel empowered to take ownership, exercise freedom in their ideas, and contribute to solutions that genuinely make an impact. Learning is at the heart of who we are—our teams constantly grow through hands-on exposure, real-world problem solving, and continuous knowledge sharing across functions and geographies.
We don’t just build long-term partnerships with clients—we build long-term careers for our people. At Mako, you’ll be part of a collaborative, supportive, and fast-growing global team where curiosity is encouraged, initiative is celebrated, and every individual plays a meaningful role in shaping the company’s journey.
Job Summary:
We are looking for an experienced, detail-oriented Data Engineer to build, manage, and optimize data pipelines and datasets. The ideal candidate has strong expertise in MS Excel, solid knowledge of Python, and hands-on experience in cleaning, transforming, and handling large datasets. This role supports business and analytics teams by delivering reliable, high-quality data.
Key Responsibilities:
Design and maintain data pipelines and workflows.
Perform data cleaning, transformation, and validation of large datasets.
Create advanced reports using MS Excel (Pivot Tables, VLOOKUP, XLOOKUP, Power Query).
Write and maintain Python scripts for data automation and processing.
Optimize data storage and retrieval processes.
Ensure data accuracy, consistency, and integrity across multiple sources.
Collaborate with business analysts and stakeholders to understand data requirements.
Support ETL and data warehousing processes.
Required Skills:
Strong experience in MS Excel (advanced formulas, Pivot Tables, Power Query).
Strong knowledge of Python (Pandas, NumPy, data manipulation).
Hands-on experience with data handling, transformation, and cleaning.
Good knowledge of SQL (writing complex queries, joins, indexing).
Experience working with large datasets and structured/unstructured data.
Familiarity with ETL tools/processes.
Good to Have:
Experience with cloud platforms such as AWS, Azure, or GCP.
Familiarity with data visualization tools such as Power BI or Tableau.
Knowledge of Apache Spark or similar big data tools.
Experience with version control tools like Git.