Arcadia Healthcare Solutions

  • Software Engineer in Big Data Integration

    Location
    US-PA-Pittsburgh
  • Overview

    In this position you will work with a big data engineering group to design, build, and test data processes that ingest and analyze over a billion records every night. Leveraging technologies such as Apache Spark, Apache NiFi, SQL Server, and Elasticsearch, you will implement data integration projects on cloud platforms such as AWS. The ideal candidate is passionate about using technology to solve complex problems in a distributed, heterogeneous environment.


    Top Reasons to Work with Us

    • Opportunity to work for an awesome, growing software company
    • Opportunity to work with a highly scalable cloud platform
    • Opportunity to develop a highly disruptive platform that is going to change healthcare analytics

    What’s In It for You

    • Opportunity to be part of a team creating a big data platform that is drastically improving healthcare analytics
    • Awesome work environment (teleworking opportunities also considered)
    • Competitive compensation
    • Great benefits like flexible time off
    • Stocked kitchen with snacks, beverages, and more

    General Principles

    • Motivated by bleeding-edge technologies and methodologies
    • Track record of developing and delivering work into production
    • Strong fundamentals in database queries and functional programming best practices
    • Versatile (full stack and full cycle)
    • Proven ability to adopt new technologies
    • Craftsmanship
    • Committed, disciplined, self-motivated, and self-organizing
    • Finds a way to move forward
    • Collaborates throughout the software development life cycle
    • Contributes to the continuous improvement of our software development processes

    Responsibilities

    What You Will Be Doing

    • Work with teams and clients to extract healthcare data
    • Identify common data transformation patterns and implement reusable, scalable solutions
    • Execute data discovery and identify value within new datasets, leading to new extraction methods and/or tools
    • Work with massive datasets and use technology to transform them into valuable assets
    • Create queries in SQL, Spark SQL, or other languages to cleanse and transform incoming data into standard formats
    • Design and implement software components
    • Perform code reviews
    • Write unit tests
    • Write integration tests
    • Deploy software components
    • Groom features (epic definition, story estimates, task breakdown)
    • Manage code repositories
    • Establish and enforce software versioning
    • Establish and maintain efficient local development environments
    • Provide, analyze, and respond to software development metrics such as feature lifecycle and burn-down
    • Provide feedback and recommendations to improve software development processes

    Qualifications

    Required

    • At least X years of related work experience
      (Level 1: 0-2 years, Level 2: 2-5 years, Level 3: 5-10 years, Architect: 10+ years)
    • Experience with one or more databases such as MySQL, PostgreSQL, Microsoft SQL Server, and/or Oracle
    • Experience with one or more languages such as Java, Python, and/or Scala
    • Experience working with complex data problems

    Preferred

    • Experience with distributed DC/OS or Hadoop-like technologies running Spark, Storm, and/or Kafka
    • AWS experience
    • Big data experience

