796 Database Developers jobs in the Philippines
Data Engineer / Senior Data Engineer
Posted today
Job Description
About Us
MoneyHero Group (Nasdaq: MNY) is a market leading financial products platform in Greater Southeast Asia, reaching 9.8m monthly unique users and working with more than 270 commercial partners across five markets including Singapore, Hong Kong S.A.R., Philippines, Taiwan, and Malaysia.
About The Job
The Senior Data Engineer is responsible for overseeing the design, development, implementation, and maintenance of all data-related systems, infrastructure, and processes within an organization. They will lead a team of data engineers and work closely with data scientists, business analysts, and other stakeholders to ensure that data is properly collected, processed, stored, and made available for analysis and decision-making purposes.
The Data Engineer Will
- Develop and implement technical data architecture and engineering strategies to support the organisation's data-driven initiatives, including data warehousing, ETL processes, and data governance.
- Build and maintain scalable and reliable data pipelines, ensuring data quality and consistency across all systems.
- Create and maintain data models that align with business requirements, ensuring data integrity and optimal performance.
- Manage, guide, and support junior data engineers in the development and implementation of data-related solutions.
- Collaborate with data scientists, business analysts, and other stakeholders to identify data-related needs and requirements, and develop and implement solutions.
- Monitor and troubleshoot data-related issues, providing timely resolution and support to minimize downtime and disruption.
In This Role, We Are Looking For Someone With
- At least 4 years of hands-on experience in data engineering building scalable pipelines and infrastructure
- Proven ability to create and maintain data models that align with business requirements, ensuring data integrity and optimal performance
- Strong technical skills in data warehousing, ETL processes, and database management, using tools such as SQL, Python, and dbt (see the sketch below)
- Experience with cloud-based data technologies such as AWS, Azure, or Google Cloud (BigQuery preferred)
- Knowledge of data security and privacy regulations, such as GDPR and HIPAA
- Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders
- Analytical and problem-solving skills, with the ability to identify and resolve data-related issues and implement effective solutions
- Bachelor's degree in computer science, engineering, or a related field
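To make the SQL/Python/dbt bullet above concrete, here is a minimal, hypothetical sketch of the kind of data-quality gate such pipelines often include, written in Python against BigQuery (the preferred warehouse above). The project, dataset, table, column names, and thresholds are invented for illustration, not MoneyHero's actual setup.

```python
# Hypothetical data-quality gate for a pipeline step (BigQuery).
# Requires google-cloud-bigquery with credentials configured; the
# table/column names and thresholds below are made up.
from google.cloud import bigquery

def check_yesterdays_partition(table: str = "analytics.users_daily") -> None:
    client = bigquery.Client()
    sql = f"""
        SELECT
          COUNT(*) AS row_count,
          COUNTIF(user_id IS NULL) AS null_user_ids
        FROM `{table}`
        WHERE snapshot_date = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
    """
    row = next(iter(client.query(sql).result()))

    # Raise so the orchestrator marks the run failed instead of
    # silently shipping bad data downstream.
    if row.row_count == 0:
        raise ValueError(f"{table}: no rows for yesterday's partition")
    if row.null_user_ids / row.row_count > 0.01:  # >1% nulls, arbitrary
        raise ValueError(f"{table}: {row.null_user_ids} null user_ids")

if __name__ == "__main__":
    check_yesterdays_partition()
```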
What can you expect from us?
Impact: We are actively empowering and connecting people to a better financial future. Join us if you want to help us achieve our mission.
Work: We have a team of over 350 talented individuals in 4 markets who are hyper passionate about building innovative financial solutions and making an impact on people's lives.
Culture: We take our work seriously but don't hesitate to keep things light. We can only create magic when we have a little bit of fun.
Thrive: We launched in 2014, and now help over 10 million monthly users make the best financial decisions. Accelerate your career and become a pioneer in your field with a leading fintech company that seeks to push the boundaries of your imagination and is committed to growing your career.
Reputation: We are backed by world-class organizations and companies and have raised over US$110 million from investors including Experian, Pacific Century Group, and IFC, a member of the World Bank Group.
EEO Statement
MoneyHero Group is an equal opportunity employer. We value, support, and respect all individuals and are committed to maintaining an inclusive and diverse working environment. Decisions in hiring are based on business needs, requirements of the job, and individual qualifications, and shall not be influenced by any consideration of race, ethnic or national origin, religion, sex (including gender identity and/or expression), age, sexual orientation, marital status, parental status, disability, genetic information, political affiliation or other applicable legally protected characteristics.
Data Engineer
Posted today
Job Description
KEY RESPONSIBILITIES
• Lead the design, development, and maintenance of robust data pipelines and workflows.
• Write and optimize complex SQL queries, stored procedures, and views for performance and scalability.
• Oversee and manage the operational aspects of data stores.
• Drive data optimization, performance tuning, and integrity checks.
• Manage cloud-based SQL Server (Azure SQL Managed Instance) environments.
• Lead the migration of data and systems from SQL Server to Snowflake.
• Work with data lake environments and support data ingestion and transformation processes.
• Collaborate with cross-functional teams including data analysts, developers, and business stakeholders.
• Provide technical leadership, code reviews, and mentorship to junior team members.
QUALIFICATIONS
• Strong SQL expertise including stored procedures, views, and performance tuning.
• Experience in the Azure cloud, particularly with Azure SQL Managed Instance.
• Hands-on experience with Snowflake and cloud-based data migration (see the sketch below).
• Familiarity with data lake architectures and concepts.
• Experience with SSIS and/or Crystal Reports is a plus.
• Strong problem-solving skills and attention to detail.
• Excellent communication and leadership skills.
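As a rough illustration of the SQL Server-to-Snowflake migration work listed above, the following hypothetical Python sketch copies one table in batches using pyodbc and the Snowflake connector. All connection details, table names, and the batch size are placeholder assumptions; at real volumes you would more likely stage files in cloud storage and load them with Snowflake's COPY INTO.

```python
# Hypothetical batch copy of one table from SQL Server to Snowflake.
# Requires pyodbc and snowflake-connector-python; every connection
# detail and table/column name below is a placeholder.
import pyodbc
import snowflake.connector

src = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example.database.windows.net;DATABASE=sales;"
    "UID=etl_user;PWD=***"
)
dst = snowflake.connector.connect(
    account="example-account", user="etl_user", password="***",
    warehouse="LOAD_WH", database="SALES", schema="PUBLIC",
)

src_cur = src.cursor()
src_cur.execute("SELECT order_id, customer_id, amount FROM dbo.orders")

dst_cur = dst.cursor()
while True:
    rows = src_cur.fetchmany(10_000)  # stream in batches, not all at once
    if not rows:
        break
    dst_cur.executemany(
        "INSERT INTO orders (order_id, customer_id, amount) VALUES (%s, %s, %s)",
        [tuple(r) for r in rows],
    )

dst.commit()
src.close()
dst.close()
```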
Data Engineer
Posted today
Job Description
Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe.
You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio; including Software and Red Hat.
Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for Source to Target and implementing solutions that tackle the clients' needs.
Your Primary Responsibilities Include
- Strategic Data Model Design and ETL Optimization: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Robust Data Infrastructure Management: Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Seamless Data Accessibility and Security Coordination: Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.
Required Technical And Professional Expertise
- Experience in ETL, Snowflake, and/or dbt.
- Experience in SQL, Unix scripting, and/or shell scripting.
- Experience with databases such as DB2 and Netezza.
- Experience with scheduling tools such as Control-M.
Preferred Technical And Professional Experience
- Technical Development experience
- Demonstrated client interaction and excellent communication skills, both written and verbal
- Amenable to working on a client-dictated schedule (day, mid, and night shifts) and location
Data Engineer
Posted today
Job Description
The Data Engineer is responsible for the creation, maintenance, and continuous improvement of data pipelines. Part of his/her responsibilities is to implement best practices in data management (i.e., cleaning, validation, and transformation of data) and to turn data into usable datasets that can easily be consumed by other teams. This role will also work closely with software engineers, data analysts, data scientists, and data governance to understand how the data behaves in its respective domain, to clarify business and technical requirements on different data use cases, and to design and create efficient and reliable data pipelines. Within Data Engineering, this person will learn and adopt best practices on data management, data architecture design, and DataOps principles. Whether in Central DE or Distributed DE, a Data Engineer is crucial in creating value for downstream teams that use data.
About the Role
Key Responsibilities:
- Develop, maintain, and optimize data pipelines, data models, and data management solutions across data warehouses, data/delta lakes, or lakehouse environments.
- Collaborate with upstream teams (e.g., Mesh Teams) to integrate data sources and with downstream teams to ensure data usability and accessibility.
- Understand and adhere to existing technology standards and Data Engineering (DE) best practices.
Responsibilities
Central DE:
- Maintain and enhance the overall data architecture, ensuring scalability, high availability, and timely data ingestion.
- Build and optimize data pipelines for new data sources, applying DataOps principles to ensure seamless operations and minimal disruptions.
Distributed DE:
- Acquire and maintain deep domain knowledge of assigned data areas to inform data modeling and pipeline development.
- Design and develop data models for Zone 2 (silver layer) and Zone 3 (gold layer), ensuring business datasets are accurate, reliable, and ready for downstream consumption.
Qualifications
- Good working knowledge of shell scripting (e.g., bash, zsh)
- Good working knowledge of data manipulation (SQL statements, JSON, NoSQL queries, etc.)
- Good working knowledge of AWS services (EC2, S3, Glue Crawlers, Jobs, Batch, Athena, Lambda, etc.) or equivalent cloud offerings is a big plus
- Good working knowledge of Apache Spark using SQL/Python (see the sketch below)
- Good understanding of the concepts of data warehouses, data lake/Delta Lake, and/or lakehouse
- Ability to work with other Leads to foster a culture of collaboration and teamwork
Required Skills
Central DE:
- Good knowledge of Linux/Unix administration
- CI/CD experience using Terraform a big plus
Distributed DE:
- Good working knowledge of data modeling
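As an illustration of the Zone 2 (silver layer) modeling described above, here is a small, hypothetical PySpark sketch that cleans a raw (bronze) dataset into a deduplicated, partitioned silver table. The paths, schema, and deduplication key are invented; the real zone layout would follow the team's own standards.

```python
# Hypothetical bronze -> silver (Zone 2) cleanup with PySpark.
# Paths, columns, and the dedup key are invented for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

raw = spark.read.json("s3://example-lake/bronze/orders/")

silver = (
    raw
    .filter(F.col("order_id").isNotNull())                  # drop unusable rows
    .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .dropDuplicates(["order_id"])                           # one row per order
)

# Partitioned write keeps downstream (gold-layer) scans cheap.
(silver.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-lake/silver/orders/"))
```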
Data Engineer
Posted today
Job Description
Why Nasdaq
When you work at Nasdaq, you're working for more open and transparent markets so that more people can access opportunities. Connections can be made, jobs can be created, and communities can thrive. We want all our employees to have access to opportunity, too. That means planning for career growth, ensuring you have the tools you need, and promoting an inclusive culture where we're all valued for our unique perspective.
Here, you will work for a global tech leader committed to breaking down barriers to inclusive prosperity. We see technology as a means to free people up to work together more productively and effectively by centralizing data, analytics, and market intelligence.
Here, we're committed to building a more diverse and inclusive workforce. Not only is it our responsibility to do better, but we also need representative voices to power the fresh thinking that is vital for our business and our clients.
What We Offer
This is a permanent full-time role based in Bonifacio Global City, Taguig following a hybrid work model setup (at least 2 in-office days per week).
You can expect an autonomous but fast-paced work environment where you are recognized for your results and ability to drive things forward. Every day brings many opportunities to learn and grow, along with the rewards of the global impact we create.
In return, you will receive HMO coverage for you and your dependents, employee stock purchase plan, equity grant, retirement plan, annual bonus, free counseling sessions, subscription to e-learning platforms, fitness, wellness and more.
What You Will Do
The Data Operations and Engineering (DOPE) division of Nasdaq Data Link is responsible for the datasets that our customers buy and consume every day. We are seeking a Data Operations Analyst to join our growing team.
As part of the Data Operations Team, you will contribute by monitoring and maintaining hundreds of datasets on Nasdaq Data Link. This includes monitoring various tools to ensure dataset timeliness and accuracy, communicating with stakeholders, prioritizing and resolving data issues, and handling various maintenance tasks related to data products. Specifically, you will:
- Continuously monitor a shared email inbox, various Slack channels and PagerDuty to respond to alerts related to data delays.
- Troubleshoot alerts and determine the appropriate course of action.
- Communicate with partners, Customer Success and other stakeholders to keep them aware of ongoing issues.
- Assist the various support teams to resolve outstanding queries from customers and partners.
- Develop your programming skills in Python, Spark and SQL.
- Learn to use new tools such as Databricks and Monte Carlo.
What We Expect
- Have 1-3 years of professional experience with data products and data management. Recent graduates are welcome to apply.
- Be amenable to working a rotational shift schedule, including weekend work, as the team operates 24/7 depending on business needs.
- Have a Bachelor's degree in Computer Science/Engineering or equivalent qualifications/experience.
- Able to communicate professionally with internal and external stakeholders.
- Be eager to learn new technologies and best practices in data engineering by working with a world-class data team.
- Have a basic knowledge of Python/SQL/Git and be willing to further develop these skills.
- Be eager to learn about parallel processing frameworks, orchestration/scheduling tools, and distributed data systems such as Databricks and Airflow.
- Be entrepreneurial and enthusiastic about working in a fast-paced environment.
What Would Be Helpful
- Familiarity with Databricks and Airflow
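For a taste of the orchestration tools named above, here is a tiny, hypothetical Airflow DAG that runs an hourly dataset freshness check and fails loudly when data is stale, which is what lets tools like PagerDuty raise an alert. The dataset name and the freshness lookup are placeholders, not Nasdaq Data Link's actual monitoring.

```python
# Hypothetical hourly freshness check as an Airflow DAG.
# The dataset name and the "last updated" lookup are placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def check_dataset_freshness(dataset: str = "EXAMPLE/PRICES") -> None:
    # A real check would query pipeline metadata for the last-updated
    # timestamp; raising here fails the task and triggers alerting.
    last_updated = datetime(2024, 1, 1)  # placeholder lookup
    if datetime.utcnow() - last_updated > timedelta(hours=24):
        raise RuntimeError(f"{dataset} is stale")

with DAG(
    dag_id="dataset_freshness_check",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="check_freshness",
        python_callable=check_dataset_freshness,
    )
```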
Does It Sound Like You?
Please follow through by clicking the "Apply" link and submitting your application. If your skills and experience are a match, we will be in touch soon. In the meantime, please visit our website and social media channels to learn more about our innovative business, inclusive culture and discover why Nasdaq Manila is Great Place To Work certified.
Come as You Are
Nasdaq is an equal opportunity employer. We positively encourage applications from suitably qualified and eligible candidates regardless of age, color, disability, national origin, ancestry, race, religion, gender, sexual orientation, gender identity and/or expression, veteran status, genetic information, or any other status protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Data Engineer
Posted today
Job Viewed
Job Description
- Design and build robust, scalable, and efficient data pipelines using Spark (PySpark, Scala, SQL, etc.).
- Manage and optimize ETL/ELT processes, including data ingestion, transformation, and integration from multiple sources into data lakes and warehouses.
- Utilize Azure services such as Azure Data Factory, Azure Databricks, Azure Synapse, Azure Data Lake, and Azure SQL Database for data engineering tasks.
- Ensure data quality, reliability, security, and performance across all pipelines and systems.
- Collaborate with data analysts, BI developers, and stakeholders to understand data needs and deliver accurate solutions.
- Perform data profiling, validation, and cleansing to maintain integrity and consistency (see the sketch after this list).
- Monitor, troubleshoot, and optimize existing pipelines and workflows.
- Document processes, data flows, and technical solutions.
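As a sketch of the data profiling and validation step mentioned above, the following hypothetical PySpark snippet computes per-column null counts over a dataset in Azure Data Lake. The abfss path and storage account are invented for illustration.

```python
# Hypothetical column-level profiling step with PySpark on Azure Data Lake.
# The abfss path and storage account names are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("profile_orders").getOrCreate()

df = spark.read.parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/"
)

# Null count per column in one pass: a quick integrity snapshot
# before the data is promoted to reporting tables.
null_counts = df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
)
null_counts.show()

print(f"rows={df.count()}, columns={len(df.columns)}")
```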
Qualifications & Skills:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in data engineering or a similar role.
- Strong hands-on experience with Azure Data Factory, Databricks, Synapse, Data Lake, and SQL Database.
- Proficiency in ETL/ELT design and implementation.
- Expertise in Spark (PySpark, Scala, SQL) for large-scale data processing.
- Strong knowledge of SQL (querying, optimization, stored procedures).
- Familiarity with Python for scripting and automation.
- Understanding of data modeling, warehousing, and schema design.
- Knowledge of data governance, security, and compliance best practices.
- Strong analytical, problem-solving, and collaboration skills.
Additional Requirements:
- Open to project-based employment (12 months, with the possibility of absorption depending on performance)
Data Engineer
Posted today
Job Description
At SiteMinder we believe the individual contributions of our employees are what drive our success. That's why we hire and encourage diverse teams that include and respect a variety of voices, identities, backgrounds, experiences and perspectives. Our diverse and inclusive culture enables our employees to bring their unique selves to work and be proud of doing so. It's in our differences that we will keep revolutionising the way for our customers. We are better together.
What We Do…
We're people who love technology but know that hoteliers just want things to be simple. So since 2006 we've been constantly innovating our world-leading hotel commerce platform to help accommodation owners find and book more guests online - quickly and simply.
We've helped everyone from boutique hotels to big chains, enabling travellers to book igloos, cabins, castles, holiday parks, campsites, pubs, resorts, Airbnbs, and everything in between.
And today, we're the world's leading open hotel commerce platform, supporting 47,000 hotels in 150 countries - with over 125 million reservations processed by SiteMinder's technology every year.
About The Data Engineer Role.
The Enterprise Data Management and Analytics (EDMA) team at SiteMinder is the central BI and analytics team catering to all analytical needs across the company. This role plays an important part in the core data engineering and Ops team within EDMA.
As a Data Engineer, you will be part of the core data engineering team responsible for the entire BI infrastructure.
What You'll Do.
- Build core data architecture components from scratch as part of enterprise data architecture enhancements
- Analyze and optimize the existing Data architecture using best practices
- Identify, analyze, and integrate various data source systems being used across the organization
- Ensure existing data architecture works optimally and undertake regular maintenance tasks
What You Have.
- Extensive experience in a Data Engineer role.
- Experience developing enterprise data architecture using the latest big data technologies, preferably open source
- Experienced in AWS services (S3, Glue, Lambda, SQS, SNS, Eventbridge, Athena, AppFlow, etc.) and familiarity with cloud-native architecture.
- Experience designing complex workflows on workflow management tools such as Airflow, AWS Data Pipeline, etc.
- Strong experience in Terraform for infrastructure automation and cloud management.
- Experience building scalable ETL pipelines, transforming and integrating data from various sources (such as SaaS applications) into centralized storage solutions with tools like Fivetran, AWS AppFlow, and Xplenty.
- Experience developing a customized integration layer in Python over REST and SOAP APIs is a plus (see the sketch after this list)
- Strong technical ability to understand, design, write, and debug complex code in Python, Spark, and SQL is a must
- Experience with DBT to manage data transformation workflows within the Data Lakehouse environment, ensuring streamlined data pipelines.
- Experience with CI/CD tools like Buildkite and Jenkins.
- Understanding of data modeling, data governance and security best practices in cloud environments.
- Strong analytical and troubleshooting skills to identify bottlenecks, optimize data flows, and ensure system scalability and performance.
- Experience implementing MLOps solutions that deploy machine learning models into production efficiently, and overseeing model versioning, monitoring, and scaling in a production environment, is a plus
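To illustrate the Python REST integration work mentioned above, here is a rough, hypothetical sketch that pages through a SaaS API and lands the raw records in S3 for downstream tools to pick up. The API URL, pagination scheme, and bucket are invented assumptions; a SOAP source would swap requests for a SOAP client such as zeep.

```python
# Hypothetical REST integration layer: pull paginated records from a
# SaaS API and land them raw in S3. URL, auth, pagination, and bucket
# are all placeholders for illustration.
import json

import boto3
import requests

API_URL = "https://api.example-saas.com/v1/bookings"  # placeholder
BUCKET = "example-raw-zone"                           # placeholder

s3 = boto3.client("s3")
page, records = 1, []

while True:
    resp = requests.get(API_URL, params={"page": page}, timeout=30)
    resp.raise_for_status()
    batch = resp.json().get("results", [])
    if not batch:
        break
    records.extend(batch)
    page += 1

# Land one JSON-lines object per run; downstream jobs (Glue, dbt, ...)
# pick it up from here.
s3.put_object(
    Bucket=BUCKET,
    Key="bookings/page_dump.jsonl",
    Body="\n".join(json.dumps(r) for r in records).encode("utf-8"),
)
```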
Our Perks & Benefits…
- Mental health and well-being initiatives
- Generous parental (including secondary) leave policy
- Paid birthday, study and volunteering leave every year
- Sponsored social clubs, team events, and celebrations
- Employee Resource Groups (ERG) to help you connect and get involved
- Investment in your personal growth offering training for your advancement
Does this job sound like you? If yes, we'd love for you to be part of our team! Please send a copy of your resume and our Talent Acquisition team will be in touch.
When you apply, please tell us the pronouns you use and any adjustments you may need during the interview process. We encourage people from underrepresented groups to apply.
Data Engineer
Posted today
Job Description
Job Title: Data Engineer
Location: Makati
Work Setup: Hybrid (3x RTO 2x WFH per week)
Employment Type: Full-Time
Position Summary
We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and warehouse solutions. This role will focus on optimizing data workflows, ensuring data integrity, and implementing robust governance practices to support analytics and business intelligence. The ideal candidate will have expertise in cloud platforms, ETL development, and infrastructure automation.
Key Responsibilities
Data Warehouse Architecture & Design
- Design and manage data warehouse architecture, ensuring scalability, performance, and alignment with business needs.
- Develop optimized data models, schemas, and indexing strategies for efficient querying and large dataset handling.
- Troubleshoot performance bottlenecks in data pipelines and warehouse systems.
ETL Development & Data Integration
- Build and optimize ETL/ELT pipelines to ingest and transform data from diverse sources (APIs, databases, flat files).
- Implement CI/CD practices for automated testing and deployment of data workflows.
- Monitor data pipelines for anomalies and enforce data quality checks (validation, audits).
Data Governance & Security
- Enforce role-based access control (RBAC), encryption, and data masking to comply with privacy regulations (e.g., GDPR, HIPAA).
- Classify data by sensitivity and document data lineage for traceability.
Collaboration & Support
- Partner with analysts, BI teams, and data scientists to deliver high-quality datasets for reporting.
- Support complex data analysis to uncover trends and strategic insights.
Qualifications
Technical Skills
- Proficiency in SQL, Python, or Scala for data processing.
- Experience with cloud platforms (AWS, GCP, Azure) and tools like Snowflake, Redshift, or BigQuery.
- Knowledge of ETL frameworks (Apache Airflow, dbt) and IaC (Terraform, Kubernetes).
Soft Skills
- Strong problem-solving and performance-tuning abilities.
- Excellent communication to bridge technical and business teams.
Preferred Experience
- 3+ years in data engineering, with exposure to governance and warehousing.
- Certifications in cloud data technologies or data governance.
Why Join Us?
- Opportunity to shape enterprise-grade data infrastructure.
- Collaborative, innovative environment with cutting-edge tools.
- Competitive compensation and career growth.
Data Engineer
Posted today
Job Description
Role Description
This is a full-time remote role for a Data Engineer at Encora Philippines. The Data Engineer will be responsible for designing, implementing, and managing data infrastructure and pipelines. Key tasks include working on data modeling, building and optimizing ETL processes, developing and maintaining data warehouses, and performing data analytics. Daily duties will involve collaborating closely with data scientists, analysts, and other stakeholders to ensure data quality and integrity.
Qualifications
- Skills in Data Engineering and Data Modeling
- Experience in Extract Transform Load (ETL) processes
- Proficiency in Data Warehousing
- Data Analytics expertise
- Strong problem-solving and analytical skills
- Ability to work independently and remotely
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Experience with cloud data platforms is a plus
Data Engineer
Posted today
Job Description
SPAC Information Technology Inc. is hiring a Casual/Temporary Data Engineer role in Ortigas Center, NCR. Apply now to be part of our team.
Job summary:
- Looking for candidates available to work:
- Monday: Afternoon, Morning
- Tuesday: Afternoon, Morning
- Wednesday: Afternoon, Morning
- Thursday: Afternoon, Morning
- Friday: Afternoon, Morning
- Saturday: Afternoon, Morning
- Sunday: Afternoon, Morning
About the job
We're looking for passionate and experienced Data Engineering Specialists who thrive on solving complex problems and unlocking the full potential of data for our customers. If you have a strong technical background in AI, machine learning, and cloud-based analytics platforms, this is an exciting opportunity to drive meaningful change in organizations through data-driven strategies.
In this role, you'll partner closely with clients to design and implement cutting-edge AI and analytics solutions tailored to their unique needs. Your work will empower them to streamline business processes, make smarter decisions, and discover valuable insights hidden within their data.
KEY RESPONSIBILITIES
- Customer Engagement: Collaborate closely with customer business and technical teams to understand their data needs and requirements, translating complex data challenges into effective pipeline solutions that drive successful outcomes.
- Analytics Platform Solutions Design: Design and implement end-to-end data platforms for AI and analytics, incorporating flexible data pipelines and appropriate data architectures for efficient data storage and access while incorporating robust security measures to protect sensitive information.
- Data Pipeline Development: Design and implement scalable data pipelines to ingest, process, and transform data from various sources into accessible formats for analysis, ensuring optimization for performance and reliability.
- Data Platform Implementation: Oversee the implementation of cloud-based data platforms (e.g., Snowflake, AWS Redshift, Google BigQuery) that enable seamless data storage, processing, and access while ensuring alignment with customer objectives.
- Cross-Functional Team Collaboration: Collaborate with various teams to understand their data requirements and provide the necessary infrastructure, tools, and support to enable successful analytics and AI initiatives.
- Documentation and Reporting: Maintain comprehensive documentation of data pipeline architectures, processes, and workflows. Regularly update stakeholders on project status, performance metrics, and challenges encountered during development.
- Continuous Improvement: Stay up to date with the latest trends in data engineering and cloud technologies. Identify areas for improvement in data pipelines and platforms, proposing innovative solutions to enhance performance and efficiency.
QUALIFICATIONS
- Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: Over 5 years of experience in data engineering, data pipeline development, and cloud data platform implementation, with extensive experience in environments such as AWS, Azure, or Google Cloud.
- Technical Expertise: Proficient in data pipeline tools and technologies (e.g., Apache Kafka, Apache Airflow, AWS Glue) and cloud-based data platforms (e.g., Snowflake, AWS Redshift, Google BigQuery); see the sketch after this list.
- Analytical and Problem-Solving Skills: Strong ability to analyze data workflows and troubleshoot issues related to data ingestion, transformation, and integration.
- Communication: Excellent communication skills, with the ability to translate technical concepts into business value and work effectively with cross-functional teams.
- Self-Starter and Fast Learner: Ability to proactively lead tasks and projects with minimal supervision and adapt quickly to new technologies and evolving requirements.
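As an illustration of the pipeline tooling listed above, here is a small, hypothetical Python sketch that consumes events from a Kafka topic and batches them into JSON-lines files ready for loading into a warehouse. The topic, brokers, and batch size are invented for illustration.

```python
# Hypothetical ingestion step: consume events from a Kafka topic and
# batch them into JSON-lines files for downstream loading. Topic name,
# brokers, and batch size are invented.
import json

from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "customer-events",                      # placeholder topic
    bootstrap_servers=["broker1:9092"],     # placeholder brokers
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

batch, BATCH_SIZE = [], 1_000
for message in consumer:
    batch.append(message.value)
    if len(batch) >= BATCH_SIZE:
        # In production this would land in S3/Redshift/Snowflake;
        # a local file keeps the sketch self-contained.
        with open("events.jsonl", "a", encoding="utf-8") as f:
            f.writelines(json.dumps(e) + "\n" for e in batch)
        batch.clear()
```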