Thank you for subscribing to your personalised job alerts.

4 jobs found for Cloud

    • permanent
    • S$80,000 - S$110,000 per year
    • full-time
    About the role
    As a Data Engineer you will be responsible for developing, industrialising and optimising a big data platform running on GCP. You will ingest new data sources, write data pipelines as code, and transform, enrich and publish data using the most efficient methods. Working with data from across the global data infrastructure, you will understand the best way to serve up data at scale to a global audience of analysts. You will work closely with the data architects, data scientists and data product managers on the team to ensure that we are building integrated, performant solutions. You will have a software engineering mindset, be able to leverage CI/CD and apply critical thinking to the work you undertake. The role would suit candidates looking to move from traditional big data stacks such as Spark and Hadoop to cloud-native technologies (Dataflow, BigQuery, Docker/Kubernetes, Pub/Sub, Redshift, Cloud Functions). Candidates with strong software development skills who wish to make the leap to working with data at scale will also be considered.

    Responsibilities include:
    • Designing and building end-to-end data engineering solutions on the Google Cloud Platform
    • Ensuring the platform is secure, compliant and efficient
    • Being a proactive member of a DevOps / Agile scrum driven team, always looking for ways to tune and optimise all aspects of work delivered on the platform
    • Aligning work to both core development standards and architectural principles

    Skills and experience required
    • Strong programming skills in languages such as Python/Java/Scala, including building, testing and releasing code into production
    • Strong SQL skills and experience working with relational/columnar databases (e.g. SQL Server, Postgres, Oracle, Presto, Hive)
    • Knowledge of data modelling techniques and integration patterns
    • Practical experience writing data analytic pipelines
    • Experience integrating/interfacing with REST APIs / web services
    • Experience handling data securely
    • Experience with DevOps software delivery and CI/CD processes
    • A willingness to learn and find solutions to complex problems

    Desirable:
    • Experience migrating from on-premise data stores to cloud solutions
    • Experience designing and building real/near-real-time solutions using streaming technologies (e.g. Dataflow/Apache Beam, Flink, Spark Streaming)
    • Hands-on experience with cloud environments (GCP & AWS preferred)
    • Building APIs and apps using Python/JavaScript or an alternative language
    • Practical experience with traditional big data stacks (e.g. Spark, Flink, HBase, Flume, Impala, Hive)
    • Experience with non-relational database solutions (e.g. BigQuery, Bigtable, MongoDB, DynamoDB, HBase, Elasticsearch)
    • Experience with AWS Data Pipeline, Azure Data Factory or Google Cloud Dataflow
    • Working with containerisation technologies (Docker, Kubernetes)
    • Experience working with data warehouse solutions, including extracting and processing data using a variety of programming languages, tools and techniques (e.g. SSIS, Azure Data Factory, T-SQL, PL/SQL, Talend, Matillion, NiFi, AWS Data Pipeline)
    • Exposure to visualisation technologies such as Looker and Tableau
    • Knowledge of and experience in automation technologies

    To apply online please use the 'apply' function; alternatively you may contact Chloe Chen at chloe.chen(@)randstad.com.sg. (EA: 94C3609 / R1768253)
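    To illustrate what "writing data pipelines as code" means here, below is a minimal sketch of an Apache Beam pipeline, the SDK behind Google Cloud Dataflow, that reads raw files, transforms them and publishes the result to BigQuery. The bucket, project, table and schema names are hypothetical placeholders, not details from this posting.

        # Minimal Apache Beam sketch: read CSV lines, parse them,
        # publish to BigQuery. All names below are illustrative only.
        import apache_beam as beam
        from apache_beam.options.pipeline_options import PipelineOptions

        def parse_line(line):
            # Turn one CSV line into the row dict BigQuery expects.
            user_id, amount = line.split(",")
            return {"user_id": user_id, "amount": float(amount)}

        # Swap DirectRunner for DataflowRunner (plus region and
        # temp_location options) to run the same code on Dataflow.
        options = PipelineOptions(runner="DirectRunner", project="my-project")

        with beam.Pipeline(options=options) as p:
            (p
             | "Read" >> beam.io.ReadFromText("gs://my-bucket/transactions.csv")
             | "Parse" >> beam.Map(parse_line)
             | "Write" >> beam.io.WriteToBigQuery(
                 "my-project:analytics.transactions",
                 schema="user_id:STRING,amount:FLOAT"))

    The same pipeline runs locally or at scale on Dataflow by changing only the runner, which is what makes this style of pipeline straightforward to test and release through CI/CD.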
    • permanent
    • S$80,000 - S$110,000 per year
    • full-time
    About the role
    You will have hands-on experience leveraging data from a wide selection of sources built on different technologies (e.g. SQL databases, BigQuery) and a keen understanding of data models and ETL processes. Using tools like R, Python and, where required, visualisation tools, you are able to analyse, model and visualise data effectively. Ideally you will have an MSc or PhD in a relevant field (e.g. Statistics, Mathematics) or equivalent hands-on experience.
    • You will build predictive models using a myriad of data
    • You will apply logical thinking and statistical learning techniques to obtain robust results the business can rely on for critical decisions
    • You will help respond to complex business questions beyond what business intelligence teams are capable of today
    • You will engage with the business and data product managers to shape and refine their questions
    • You will team up with data engineers and data product managers to get advice and support on accessing data

    Skills and experience required
    • A degree in Statistics/ML/Business Analytics, or a science/engineering degree with a keen interest in statistics. PhDs are valued, but innovation and excellence are essential
    • Strong understanding of statistical modelling, and working experience of using advanced machine learning techniques to solve business challenges
    • Strong R/Python skills with a focus on statistical and ML packages, e.g. scikit-learn, TensorFlow, Keras, PyTorch, XGBoost, NumPy, SciPy
    • Working experience with cloud-based platforms; comfortable querying modern cloud databases (e.g. BigQuery, Snowflake, Redshift)
    • Successful use of software engineering best practices, including version control (Git, Mercurial), unit testing and Agile delivery principles
    • Proven track record of using a rigorous, scientific approach to model building, testing and validation
    • Experience in data cleansing and blending (internal and external) to drive richer insights

    Desirable:
    • Knowledge of how to build enterprise data science products in cloud environments
    • Docker and Kubernetes experience
    • Experience productionising data science models that deliver demonstrable business value
    • Exposure to MLOps practices and thinking
    • Familiarity with Linux/Unix and shell scripting; experience in cloud computing is a plus
    • Data visualisation and app development techniques (Shiny, Dash, App Engine)

    To apply online please use the 'apply' function; alternatively you may contact Chloe Chen at chloe.chen(@)randstad.com.sg. (EA: 94C3609 / R1768253)
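    As a sense of the "rigorous, scientific approach to model building, testing and validation" this role asks for, here is a minimal scikit-learn sketch: cross-validate on training data, then confirm on a held-out set. The bundled toy dataset stands in for real business data.

        # Minimal predictive-modelling workflow with scikit-learn.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import cross_val_score, train_test_split

        X, y = load_breast_cancer(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        model = GradientBoostingClassifier(random_state=0)

        # Cross-validate on training data before trusting any single split.
        cv_auc = cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc")
        print(f"CV AUC: {cv_auc.mean():.3f} +/- {cv_auc.std():.3f}")

        # Validate once more on data the model has never seen.
        model.fit(X_train, y_train)
        test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        print(f"Held-out AUC: {test_auc:.3f}")

    Keeping the held-out score close to the cross-validated score is the basic check that results are robust enough for the business to rely on for critical decisions.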
    • permanent
    • full-time
    Opportunity to own the data architecture design. Newly created role due to business expansion.

    About the company
    Our client is an end user with multiple offices across Asia. With massive expansion plans, they are looking to recruit a new Big Data Engineer (Python, Spark & Hadoop) to join their team.

    About the job
    You will be responsible for:
    • Creating and maintaining optimal data pipeline architecture, including assembling large, complex data sets ready for analytics to support analytics initiatives
    • Designing technical architecture, roadmaps and delivery of big data solutions for business stakeholders across different departments (supply chain, operations, sales and marketing, finance, etc.)
    • Analysing current business processes, identifying and translating data requirements into business improvement through data analytics, including developing technical solutions on the big data platform to fulfil various business use cases
    • Evaluating and implementing big data technology and ETL
    • Establishing data management processes and enforcing standards, processes, frameworks, tools and best practices

    Skills and experience required
    As a successful applicant, you will have at least 3 years of experience in big data (Hadoop stack, Hive, Spark, etc.). A proven track record in Python and SQL is required for this role. Exposure to DevOps (CI/CD) or cloud is an added advantage.

    What's on offer
    This is an excellent opportunity to join a leading company and lead a high-value IT digital project with exposure to the latest technology. To apply online please use the 'apply' function; alternatively you may contact Hoon Teck TAN at 6510 3633. (EA: 94C3609 / R1219669)
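    The Python + Spark + SQL combination this role centres on typically looks like the sketch below: a PySpark job that assembles a large raw data set into an aggregate ready for analytics. Table, path and column names are hypothetical placeholders, not details from this posting.

        # Minimal PySpark sketch: read raw order data, aggregate per
        # department, publish a table for downstream analytics.
        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("sales-pipeline").getOrCreate()

        orders = spark.read.parquet("hdfs:///data/raw/orders")

        summary = (orders
                   .groupBy("department")
                   .agg(F.sum("amount").alias("total_amount"),
                        F.countDistinct("order_id").alias("order_count")))

        # Overwrite keeps the job rerunnable, e.g. from a scheduler or CI/CD.
        summary.write.mode("overwrite").saveAsTable("analytics.department_sales")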
    • permanent
    • full-time
    Opportunity to own the data architecture design. Newly created role due to business expansion.

    About the company
    Our client is an end user with multiple offices across Asia. With massive expansion plans, they are looking to recruit a new Big Data Engineer (Python, Spark & Hadoop) to join their team.

    About the job
    You will be responsible for:
    • Creating and maintaining optimal data pipeline architecture, including assembling large, complex data sets ready for analytics to support analytics initiatives
    • Designing technical architecture, roadmaps and delivery of big data solutions for business stakeholders across different departments (supply chain, operations, sales and marketing, finance, etc.)
    • Analysing current business processes, identifying and translating data requirements into business improvement through data analytics, including developing technical solutions on the big data platform to fulfil various business use cases
    • Evaluating and implementing big data technology and ETL
    • Establishing data management processes and enforcing standards, processes, frameworks, tools and best practices

    Skills and experience required
    As a successful applicant, you will have at least 3 years of experience in big data (Hadoop stack, Hive, Spark, etc.). A proven track record in Python and SQL is required for this role. Exposure to DevOps (CI/CD) or cloud is an added advantage.

    What's on offer
    This is an excellent opportunity to join a leading company and lead a high-value IT digital project with exposure to the latest technology. To apply online please use the 'apply' function; alternatively you may contact Hoon Teck TAN at 6510 3633. (EA: 94C3609 / R1219669)

