Arcadia Healthcare Solutions

  • Big Data Software Engineer

    Location
    US-PA-Pittsburgh
  • Overview

    In this position you will work with a big data engineering group to design, install, and test data processes that ingest and analyze over a billion records every night. Leveraging technologies such as Apache Spark, NiFi, SQL Server, and Elasticsearch, you will implement data integration projects on cloud platforms such as AWS. The ideal candidate is passionate about using technology to solve complex problems in a distributed, heterogeneous environment.

    Top Reasons to work for us

    • Opportunity to work for an awesome software company that continues to grow in size and scope
    • Opportunity to work with a highly scalable cloud platform
    • Opportunity to develop and support a highly disruptive platform that is changing healthcare analytics

    General Principles

    • Motivated by bleeding-edge technologies and methodologies
    • Track record of developing and delivering work into production
    • Strong fundamentals in database queries and functional programming best practices
    • Versatile (full stack and full cycle)
    • Proven ability to adopt new technologies
    • Craftsmanship
    • Committed, disciplined, self-motivated, and self-organizing
    • Finds a way to move forward
    • Collaborates throughout the software development life cycle
    • Contributes to the continuous improvement of our software development processes

    What’s In It For You

    • Opportunity to be part of a team creating a big data platform that is drastically improving healthcare analytics
    • Awesome work environment (teleworking opportunities considered too)
    • Competitive compensation
    • Great benefits like flexible time off
    • Stocked kitchen with snacks, beverages, and more


    Responsibilities

    What will you be doing?

    • Work with the product owner and team members to design and create reports and ETL functionality for healthcare data
    • Work with big data experts to transform legacy SQL Server products into a robust big data solution
    • Work with massive datasets, using technology to transform them into valuable assets
    • Design and implement software components
    • Perform code reviews
    • Write unit tests
    • Write integration tests
    • Deploy software components
    • Groom features (epic definition, story estimates, task breakdown)
    • Manage code repositories
    • Establish and enforce software versioning
    • Establish and maintain efficient local development environments
    • Provide, analyze, and respond to software development metrics such as feature lifecycle and burn-down
    • Provide feedback and recommendations to improve software development processes

    Qualifications

    What you need for this position

    Required:

    • 0-5 years of related work experience
    • Experience writing procedures and functions in one or more databases such as MySQL, PostgreSQL, MS SQL Server, and/or Oracle
    • Experience with one or more languages such as Java, Python, and/or Scala
    • Experience working with complex data problems

    Preferred:

    • Experience with distributed DC/OS or Hadoop-like platforms running Spark, Storm, and/or Kafka
    • AWS experience
    • Big data experience


