- Poshmark
- Redwood City, CA
- 3 months ago
- $124,700 – $208,850
Job Description
About Poshmark
Poshmark is a leading fashion resale marketplace powered by a vibrant, highly engaged community of buyers and sellers and real-time social experiences. Designed to make online selling fun, more social and easier than ever, Poshmark empowers its sellers to turn their closet into a thriving business and share their style with the world. Since its founding in 2011, Poshmark has grown its community to over 130 million users and generated over $10 billion in GMV, helping sellers realize billions in earnings, delighting buyers with deals and one-of-a-kind items, and building a more sustainable future for fashion. For more information, please visit www.poshmark.com, and for company news, visit newsroom.poshmark.com.
The Big Data team is a central player in the Poshmark organization. Our mission is to build a world-class big data platform that brings value out of data for us and for our customers. Our goals are to democratize data, support our rapidly growing business, provide self-service reporting and analytics tools, and fuel existing and new business-critical initiatives.
The Data Engineering team at Poshmark is looking for an experienced software engineer to scale our data lake, ensuring real-time access to quality data for all stakeholders. The role offers the opportunity to apply in-depth software development skills to build and maintain real-time and batch data pipelines with a focus on scalability and optimization, while collaborating with the Data Science, Analytics, and Platform Engineering teams to build tools for accessing petabyte-scale data.
Responsibilities
- Build highly scalable, available, fault-tolerant data processing systems using AWS technologies, MapReduce, Hive, Kafka, Spark, Flink, and other big data technologies to handle batch and real-time processing of tens of terabytes of data ingested every day and a petabyte-sized data warehouse.
- Own and maintain some of the most critical data pipelines at Poshmark.
- Participate in architecture discussions, influence the product roadmap, establish best practices, and take ownership of and responsibility for new projects.
- Create work breakdown structures (WBS) for projects and execute them with full ownership and minimal guidance.
Desired Skills & Experience
- 3+ years of relevant software engineering experience.
- 2+ years of data engineering experience.
- Excellent technical problem solving using data structures and algorithms, with an emphasis on optimization and code quality.
- Expertise in architecting and building large-scale data processing systems using big data technologies such as Hadoop, HBase, Spark, Kafka, Druid, Flink, data lakes, Redshift, etc.
- Eagerness to try out newer technologies and adopt them as needed.
- Familiarity with object-oriented patterns and with technologies such as Scala, Java, Python, C++, SQL, Apache Spark, Flink, Hadoop, AWS S3, Redshift/PostgreSQL, Kinesis, EMR, Apache Airflow, and Jenkins.
