Oportun, Inc.

Sr. Data Engineer

Req No.: 2021-10932
Department: Engineering
Type: Regular Full-Time
Remote / WFH: Yes
Job Locations: US-CA-San Carlos

Company Overview

ABOUT OPORTUN

Oportun (Nasdaq: OPRT) is a financial services company and digital platform that provides responsible consumer credit to hardworking people. Using A.I.-driven models that are built on years of proprietary customer insights and billions of unique data points, we have extended millions of loans and billions in affordable credit, providing our customers with alternatives to payday and auto title loans. In recognition of our responsibly designed products, which help consumers build their credit history, we have been certified as a Community Development Financial Institution (CDFI) since 2009.

 

OPORTUN’S IMPACT

Since extending our first loan in 2006, Oportun has made over 4 million loans, totaling over $10 billion, to hardworking low- and moderate-income individuals. In turn, Oportun has helped more than 905,000 people begin establishing the credit history required to enter the financial mainstream. At the same time, Oportun’s customers have saved an estimated $1.9 billion in interest and fees compared to the alternatives typically available to them.

 

Department Overview

ABOUT TECHNOLOGY @ OPORTUN

Artificial Intelligence and a digital platform are essential to our ability to fulfill Oportun’s financially inclusive mission. The Technology team @ Oportun is dedicated to this mission, which we advance by creating, delivering, and maintaining elegant, intuitive, and performant systems that support the needs of our customers and business partners.

Overview

Oportun is looking for a Senior Software Engineer (Data Platform) to join our Data Engineering team. In this role you will play a key part in developing our next-generation Data Platform to support all of our operational processes, data science, and business intelligence data processing. Our team works with the following (a brief illustrative sketch appears after the list):

  • Big Data ecosystem
  • Relational data modeling and batch processing on SQL databases
  • Distributed processing frameworks like Spark
  • Streaming data platforms like Kafka or AWS Kinesis
  • Real-time stream processing frameworks like Flink, Spark Streaming, or Kafka Streams
  • Cloud computing systems like Amazon Web Services
  • Big Data analytics tools like Hive or Spark SQL
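
As a rough illustration of the batch style above (a hedged sketch, not Oportun's actual code: the bucket paths, view name, and columns are hypothetical), a PySpark job might read raw events from the lake, aggregate them with Spark SQL, and write curated Parquet back out:

    # Minimal PySpark batch job; paths and schema are illustrative assumptions
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("daily-loan-aggregates").getOrCreate()

    # Extract: raw JSON events from the data lake (hypothetical path)
    events = spark.read.json("s3a://example-data-lake/raw/loan_events/")

    # Transform: aggregate with Spark SQL over a temporary view
    events.createOrReplaceTempView("loan_events")
    daily = spark.sql("""
        SELECT event_date, product,
               COUNT(*)    AS applications,
               SUM(amount) AS total_amount
        FROM loan_events
        GROUP BY event_date, product
    """)

    # Load: write curated, partitioned Parquet back to the lake
    daily.write.mode("overwrite").partitionBy("event_date") \
         .parquet("s3a://example-data-lake/curated/daily_loan_aggregates/")

    spark.stop()

The same job runs unchanged on a laptop or a cluster, which is why Spark and Spark SQL appear side by side in the list above.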

 

If you have a passion for writing data processing software (acquisition, cleansing, correlation, organization, analysis, and machine learning) and the challenge of building a new data infrastructure from the ground up excites you, we would love to hear from you.

Responsibilities

  • Design, develop, and maintain Java/Python-based Data Platforms, including the Data Lake, Operational Datamart, and Analytics Data Warehouses
  • Design and build scalable Java/Python-based ELT/ETL workflows to transform and integrate data into the Data Platform/Data Lake (a brief sketch follows this list)
  • Develop data strategy and roadmap for data technologies
  • Play multiple roles that span data architecture, design, data warehousing, and ELT/ETL processes
  • Work closely with product management, business, engineers, cross-functional analysts, and data scientists
  • Demonstrate masterful hands-on capability, driving components from inception to final delivery
  • Recommend and contribute to software engineering best practices, including those with enterprise-wide impact
  • Take accountability for the quality, total cost of ownership, maintainability, and security of every component or application produced
  • Perform as an expert in all parts of the software development lifecycle (e.g., coding, testing, deployment) and coach others in these practices
  • Be conversant in many technologies and learn new ones quickly
  • Explain business strategy, technical concepts, designs, or implementations clearly and concisely to a non-technical audience
  • Stay abreast of industry trends and technologies, and know when, how, and whether to apply them
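
As a sketch of the ELT/ETL responsibility above (hedged assumptions: sqlite3 stands in for both the operational store and the warehouse so the example runs anywhere, and the customer tables are hypothetical, not Oportun's schema), an extract-transform-load step in Python could look like this:

    # Self-contained ETL sketch: extract raw rows, cleanse them, load a dimension table
    import sqlite3

    def extract(conn):
        # Extract: pull raw customer rows from the operational store
        return conn.execute("SELECT id, name, email FROM customers_raw").fetchall()

    def transform(rows):
        # Transform: trim names, normalize emails, drop rows with no email
        return [
            (cid, name.strip(), email.strip().lower())
            for cid, name, email in rows
            if email
        ]

    def load(conn, rows):
        # Load: upsert into the warehouse dimension table
        conn.executemany(
            "INSERT OR REPLACE INTO dim_customer (id, name, email) VALUES (?, ?, ?)",
            rows,
        )
        conn.commit()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customers_raw (id INTEGER, name TEXT, email TEXT)")
        conn.execute("CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
        conn.executemany(
            "INSERT INTO customers_raw VALUES (?, ?, ?)",
            [(1, " Ana ", "ANA@EXAMPLE.COM"), (2, "Ben", None)],
        )
        load(conn, transform(extract(conn)))
        print(conn.execute("SELECT * FROM dim_customer").fetchall())
        # -> [(1, 'Ana', 'ana@example.com')]

In production the same extract/transform/load seams map onto Spark jobs or warehouse SQL; the point is the explicit, testable boundary between stages.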

Qualifications

  • Master’s in Computer Science
  • 5+ years of experience as a software developer on a Data Engineering or Data Warehousing/BI team
  • 5+ years of experience in Java/Python development
  • 3+ years of experience in SQL and Relational Databases
  • Experience using AWS to build end-to-end distributed technical solutions (ALB, ECS, EC2, Fargate, Lambda, etc.) as well as general cloud-native applications
  • Familiarity with messaging frameworks (RabbitMQ, Kafka, Kinesis) as well as relational (MySQL/Postgres) and non-relational (MongoDB, Cassandra, Elasticsearch) databases is a plus
  • Experience with Git and other version control systems, and with TDD and BDD frameworks for Python and JavaScript
  • Extensive experience working with structured and unstructured data platforms, ELT/ETL, and Data Modeling
  • Experience with test-driven development using JUnit or TestNG frameworks
  • Knowledge and experience with big data systems such as Spark is a plus
  • Experience with Machine Learning and Statistical Frameworks is a plus
  • Experience with NoSQL databases like MongoDB is a plus

