Harris Global Ltd, 3rd Floor, One Croydon, 12-16 Addiscombe Road, Croydon CR0 0XT
Data Engineer | Harris Global | 2022-04-14 – 2022-05-14

Data Engineer

London, Newcastle, Edinburgh / permanent / £85k

Harris Global are currently on the lookout for a Data Engineer to join our Financial Services client on a permanent basis. The successful candidate can be based out of the London, Newcastle or Edinburgh office and will possess strong IT skills together with data governance and analytical expertise.

Skills to include:

  • At least three years of work experience in data management disciplines including data integration, modelling, optimisation and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks
  • At least one year of experience working in cross-functional teams and collaborating with business stakeholders in support of a departmental and/or multi-departmental data management and analytics initiative.
  • Experience working with popular data discovery, analytics and BI software tools such as Power BI, Tableau, Qlik and others for semantic-layer-based data discovery.
  • Basic understanding of popular open-source and commercial data science languages and platforms such as Python, R, KNIME, Alteryx and others is a plus but not required.
  • Strong experience with data management architectures such as Data Warehouse, Data Lake and Data Hub, and with the supporting processes of Data Integration, Governance and Metadata Management
  • Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata and workload management.
  • Strong experience with popular database programming languages for relational databases (SQL, T-SQL), plus certifications or demonstrable knowledge of emerging NoSQL/Hadoop-oriented databases such as Azure Cosmos DB.
  • Knowledge of working with SQL-on-Hadoop tools and technologies, including Hive, Presto and others from an open-source perspective, and Azure Synapse Analytics (formerly SQL Data Warehouse), Azure Data Factory (ADF), Databricks and others from a commercial vendor perspective.
  • Strong experience with advanced analytics tools for object-oriented/object function scripting, using languages such as Python, Java, C++, Scala, R and others.
  • Strong experience working with both open-source and commercial message queuing technologies such as Kafka, JMS, Azure Service Bus, Amazon Simple Queue Service (SQS) and others; stream data integration technologies such as Apache NiFi, Apache Beam, Apache Kafka Streams and Amazon Kinesis; and stream analytics technologies such as Apache Kafka ksqlDB, Apache Spark Streaming, Apache Samza and others.
  • Strong experience working with DevOps capabilities such as version control, automated builds, testing and release management, using tools such as Git, Azure DevOps, Jenkins, Puppet and Ansible.
  • Demonstrated ability to work across multiple deployment environments (cloud, on-premises and hybrid) and multiple operating systems, and with containerisation technologies such as Docker, Kubernetes, Amazon Elastic Container Service and others.
  • Experience of working with data science teams to refine and optimise data science and machine learning models and algorithms is a plus but not required.