[Hiring] Data Engineer, Pipelines, Structured Markup at Vulcury

Job Overview

  • Date Posted
    April 8, 2026
  • Expiration date
    --
  • Experience
    Fresh (entry level) through 5 Years
  • Gender
    Both
  • Hiring Organization
    Vulcury

Job Description

About Us

At Vulcury, we help businesses thrive through tailored consulting and bring innovative ideas to life with our venture studio.
We believe that the future of business is built on a foundation of innovation, strategy, and execution. That’s why we’ve developed two distinct yet complementary divisions:
Consulting Division
We provide strategic insights and tailored solutions that help businesses overcome challenges, scale operations, and seize market opportunities.
Venture Studio
Our studio transforms ideas into impactful startups by leveraging our expertise in ideation, incubation, and execution.
Together, these divisions allow Vulcury to tackle complex problems, create new opportunities, and make a meaningful impact on businesses and industries.


This is a remote position.

US – Data Engineer (Pipelines & Structured Markup), Part Time

Title: Data Engineer – Pipelines & Structured Markup
Location: US (Part Time, Remote or Hybrid)
Company: Vulcury LLC

Role Overview

Vulcury is building a manufacturing intelligence infrastructure that converts raw interactions (interviews, transcripts, CAD uploads, commercial discussions) into structured, queryable data objects.

We are seeking a Part Time Data Engineer to design and maintain ingestion pipelines and structured transformation workflows that power our internal semantic “truth layer.”

This is not a reporting role.
This is a semantic infrastructure role.

Responsibilities

  • Build and maintain ingestion pipelines (Python-based ETL/ELT)
  • Design structured transformation workflows using dbt, SQLMesh, or equivalent
  • Convert unstructured transcripts and documents into normalized database records
  • Maintain PostgreSQL architecture (structured tables, JSONB, indexing strategy)
  • Develop attribute extraction frameworks for technical, commercial, and risk signals
  • Ensure data quality, consistency, and lineage from raw interaction to structured output
  • Collaborate with AI/ML engineers to ensure clean model inputs

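The transcript-to-record workflow described above can be sketched in miniature. This is a hypothetical illustration only: the record fields, the `lead_time_weeks` attribute, and the extraction regex are illustrative assumptions, not Vulcury's actual schema or pipeline.

```python
import json
import re
from dataclasses import dataclass, field

@dataclass
class InteractionRecord:
    """Normalized row for one transcript utterance (fields are illustrative)."""
    source_id: str
    speaker: str
    text: str
    attributes: dict = field(default_factory=dict)  # JSONB-ready payload

# Toy extractor for one commercial signal: a quoted lead time in weeks.
LEAD_TIME = re.compile(r"(\d+)\s*weeks?\b", re.IGNORECASE)

def extract_attributes(text: str) -> dict:
    """Pull simple signals from free text into typed attributes."""
    attrs = {}
    m = LEAD_TIME.search(text)
    if m:
        attrs["lead_time_weeks"] = int(m.group(1))
    return attrs

def normalize(source_id: str, raw_line: str) -> InteractionRecord:
    """Split a 'Speaker: utterance' line and attach extracted attributes."""
    speaker, _, text = raw_line.partition(":")
    rec = InteractionRecord(source_id, speaker.strip(), text.strip())
    rec.attributes = extract_attributes(rec.text)
    return rec

rec = normalize("interview-042", "Supplier: lead time is 6 weeks for tooling")
print(json.dumps(rec.attributes))  # serialize for a PostgreSQL JSONB column
```

In a real pipeline the structured fields would land in normalized tables while the open-ended attributes go to an indexed JSONB column, which is one common way to satisfy both the normalization and the semi-structured requirements listed here.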
Requirements

Required Skills

  • Strong Python (data pipelines, orchestration)
  • Advanced SQL (PostgreSQL preferred)
  • Experience with ETL/ELT frameworks (dbt, Airflow, SQLMesh, etc.)
  • Experience handling semi-structured data (JSON, transcripts, document parsing)
  • Strong schema design and normalization skills
  • Familiarity with cloud storage systems (S3 or equivalent)

Nice to Have

  • Experience building semantic layers or knowledge graphs
  • Experience working with manufacturing or technical data
  • Familiarity with vector databases

What Success Looks Like

  • Raw interviews automatically convert into structured records
  • Attribute confidence scoring flows downstream cleanly
  • Data lineage is fully traceable
  • Query performance remains stable as data volume scales
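The second and third criteria above (confidence scoring and traceable lineage) can be sketched together: each extracted attribute carries its score plus a pointer back to the raw source span, so downstream consumers can filter by confidence without losing traceability. The attribute names, scores, and the 0.8 threshold below are illustrative assumptions, not a stated Vulcury policy.

```python
# Hypothetical confidence-scored attributes; "source" is the lineage pointer
# back to the raw interaction each value was extracted from.
attributes = [
    {"name": "material", "value": "6061-T6", "confidence": 0.93,
     "source": {"doc": "transcript-17", "char_span": [112, 119]}},
    {"name": "annual_volume", "value": 50000, "confidence": 0.41,
     "source": {"doc": "transcript-17", "char_span": [240, 262]}},
]

THRESHOLD = 0.8  # illustrative cutoff, not from the posting

def high_confidence(attrs, threshold=THRESHOLD):
    """Keep attributes at or above the threshold; lineage rides along."""
    return [a for a in attrs if a["confidence"] >= threshold]

kept = high_confidence(attributes)
print([a["name"] for a in kept])  # ['material']
```

Because lineage travels with every attribute, a low-confidence value can always be traced back to the exact transcript span for review rather than silently dropped.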

Job Apply Type

External URL