
Become a Scout
At RiskScout, we’re building technology that empowers community bankers to fight financial crime and protect the people they serve. We’re a passionate, collaborative team driven to solve real problems for real banks and credit unions. Join us in strengthening the financial institutions that power our communities.

Join the Team
Autonomy
We trust you to manage your time and workload because you are the expert.
Mastery
Push the limits of your skills by turning complex compliance into simple solutions.
Purpose
We are a small team, so every Scout’s contribution has a big impact on our success.
Become a Scout and unlock:
- Competitive Pay
- Unlimited PTO
- Equity Packages
- 401(k)
- Medical, Dental, & Vision benefits
Job Description
As a small team of generalists, we're constantly facing new challenges. We value experienced individuals who can collaborate effectively with subject matter experts to understand the problem, develop a plan, and solve it. Solid experience with Ruby, Ruby on Rails, GraphQL, and React is crucial, though you don't need to know every tool we use.
Key Responsibilities
Designing and developing software solutions using Ruby on Rails, GraphQL, and React.
Partnering with experts to understand, plan, and solve technical challenges effectively.
Leading software development projects, ensuring timely completion and adherence to quality standards.
Using SQL to conduct analytics, perform data analysis, and build dashboards.
Conducting code reviews and maintaining high standards of code quality.
Identifying and addressing performance bottlenecks in applications for efficiency and scalability.
Keeping up-to-date with the latest developments in web technology and improving the tech stack and development processes.
Troubleshooting and improving CI/CD systems such as GitHub Actions, GitLab CI, and CircleCI.
Requirements
7+ years of experience in software development, particularly in a full-stack capacity.
Expertise in Ruby on Rails for backend API development and React for frontend development.
Proven experience with GraphQL or RESTful API design and implementation.
Strong knowledge of JavaScript and/or TypeScript.
Ability to work with event-driven systems and understand their architecture.
Experience in implementing and managing background job processing.
Demonstrated expertise in data modeling and database design.
Solid understanding of software development best practices and Git.
Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent work experience.
Tech Stack
React frontend powered by a Ruby on Rails GraphQL API
FastAPI and NestJS services
Postgres, Redis, Elasticsearch
Sidekiq background processing
Dagster
Google Cloud and Kubernetes for hosting
Google BigQuery for analytics
GitLab, Slack, Notion, Linear
Benefits
Work-life balance
Competitive salary and equity
Company-paid, personal tax advisory services
Work remotely, no commuting to the office
Paid co-working space if preferred
Health, dental, and vision insurance (US)
4 weeks of paid Discretionary Time Off (DTO)
Generous holiday schedule (US bank holidays plus the week between Christmas and New Year’s Eve)
Parental leave
Sick leave
401(k)
To apply for this position, scroll to the application form in the next section.
Location: Remote
Department: Engineering / Data Team
Reports to: Head of Engineering or Director of Data
Job Type: Full-Time
Overview
We are seeking a Data Engineer to join our analytics and data science team, focused on building and maintaining high-quality data pipelines across multiple data sources. The ideal candidate has experience working with file-based data exports from a variety of banking systems, such as Jack Henry, Fiserv, FIS, Finastra, and CSI, as representative examples of common financial data sources. This role centers on designing scalable pipelines, ensuring data quality, and transforming heterogeneous financial feeds into a consistent internal schema that supports AML, fraud analytics, and regulatory reporting.
Key Responsibilities
Design & Develop Pipelines: Build and maintain scalable ETL/ELT pipelines to ingest, normalize, and transform data from structured and semi-structured formats (CSV, XML, JSON, SFTP drops, etc.).
Data Standardization: Translate diverse data feeds—including those exported from banking systems—into a unified schema used across AML, fraud, and customer risk analysis.
Data Quality & Validation: Develop robust validation logic and monitoring tools to detect anomalies and ensure data integrity across ingestion layers.
Vendor-Agnostic Integration: Map and ingest data from a wide variety of banking software vendors (e.g., Jack Henry, Fiserv, FIS, Finastra, CSI), with the understanding that these are common industry examples—not requirements.
Cross-Team Collaboration: Work with data scientists, product managers, and compliance experts to meet evolving analytical and regulatory needs.
Client Support: Serve as a technical liaison with clients, helping them structure their data exports and diagnose issues with feed delivery or formatting.
Pipeline Optimization: Tune pipeline performance, storage efficiency, and scalability across batch and streaming environments.
Documentation & Compliance: Maintain clear, auditable documentation for data source mappings, transformations, and quality rules.
Security First: Prioritize secure handling of sensitive financial and personally identifiable information using industry best practices.
Required Qualifications
3+ years of experience as a Data Engineer or similar role working with large-scale data transformation pipelines.
Strong proficiency in Python and SQL.
Familiarity with ingesting file-based data formats (e.g., fixed-width, CSV, XML, JSON).
Experience integrating data from banking systems or other financial applications (specific vendors are not required).
Understanding of data modeling, schema design, and schema harmonization across diverse data feeds.
Hands-on experience with data pipeline orchestration tools such as Airflow, dbt, Dagster, or similar.
Experience with cloud platforms like GCP and/or AWS (S3, Lambda, Glue, RDS, etc.).
A strong focus on data quality, integrity, and traceability in regulated environments.
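Of the file formats listed above, fixed-width is the least self-describing, since the schema lives entirely in out-of-band documentation. A minimal sketch of slicing one record (the field names and column offsets are invented for illustration):

```python
# Hypothetical layout for one fixed-width record: (field name, start, end).
# Real layouts come from the vendor's file specification.
FIELDS = [("account", 0, 10), ("amount", 10, 20), ("currency", 20, 23)]

def parse_fixed_width(line: str) -> dict:
    """Slice a fixed-width record into named, whitespace-stripped fields."""
    return {name: line[start:end].strip() for name, start, end in FIELDS}

record = parse_fixed_width("0000001001  00025000USD")
```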
Bonus Experience
Exposure to AML or fraud analytics pipelines.
Experience working closely with data scientists or ML engineers.
Familiarity with streaming data systems like Kafka or Kinesis.
Previous work with financial institutions, fintechs, or regtech platforms.
Experience with data warehousing tools like BigQuery, Redshift, or Snowflake.
Why Join RiskScout?
Play a critical role in building infrastructure that supports real-time fraud detection and financial crime prevention.
Collaborate with a driven, talented team in a mission-focused startup environment.
Enjoy remote work flexibility with a culture of ownership, transparency, and meaningful technical work.
Solve novel data challenges at the intersection of compliance, regulation, and technology.
To apply for this position, scroll to the application form in the next section.

