D4Vinci/Scrapling

🕷️ An adaptive Web Scraping framework that handles everything from a single request to a full-scale crawl!


What it does

Scrapling is a Python tool that automatically collects data from websites at scale, and it's smart enough to keep working even when those websites change their layout or try to block automated visitors. Think of it as a self-healing data collection robot that can quietly gather information from across the web without getting shut out.
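The "self-healing" idea can be sketched in a few lines of plain Python. This is a conceptual illustration only, not Scrapling's actual API: every name below (`find_element`, the fingerprint dict, the element dicts) is invented for this sketch. The core trick is to remember a fingerprint of an element you matched before, so that when a redesign breaks your selector, you can relocate the element by similarity instead of failing.

```python
# Conceptual sketch of "self-healing" element selection, the idea behind
# adaptive scrapers like Scrapling. NOT Scrapling's real API: every name
# here is invented for illustration.

def find_element(elements, selector, fingerprint=None):
    """Try the saved selector first; if the site's layout changed and the
    selector no longer matches, fall back to the stored fingerprint."""
    # 1. Primary path: exact selector match (here: a (tag, class) pair).
    for el in elements:
        if (el["tag"], el.get("class")) == selector:
            return el
    # 2. Healing path: score every element against the fingerprint of the
    #    element matched on a previous run, and return the closest one.
    if fingerprint is None:
        return None

    def score(el):
        s = 0
        s += el["tag"] == fingerprint["tag"]
        s += el.get("id") == fingerprint.get("id")
        s += fingerprint["text_hint"] in el.get("text", "")
        return s

    best = max(elements, key=score)
    return best if score(best) > 0 else None


# Yesterday the price lived in <span class="price">; today the site
# renamed the class, but the fingerprint still relocates the element.
page = [
    {"tag": "h1", "text": "Widget"},
    {"tag": "span", "class": "amount", "id": "price", "text": "$19.99"},
]
fp = {"tag": "span", "id": "price", "text_hint": "$"}

el = find_element(page, selector=("span", "price"), fingerprint=fp)
print(el["text"])  # the healed match still finds "$19.99"
```

Scrapling's real matching works on parsed HTML trees and is considerably more sophisticated, but the fallback-by-similarity shape is the part that keeps scrapers alive through layout changes.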

Why it matters

For any product that depends on external web data — pricing intelligence, market research, lead generation, or competitive monitoring — this dramatically reduces the engineering effort and ongoing maintenance cost of keeping those data pipelines alive. With nearly 50,000 stars on GitHub, it signals strong market demand for resilient, low-friction web data collection, which is increasingly a competitive advantage across industries.

Why it's trending

Web scraping has become a critical infrastructure problem for AI teams and data businesses, and Scrapling is catching fire because it solves the part that always breaks: when sites update their layouts or start blocking bots, most scrapers simply fail silently. The project added over 8,100 stars this week alone and is sustaining that pace with 114 commits in the last 30 days, signaling that this isn't a viral moment but an active, fast-moving project that builders are genuinely adopting. With 4.7k forks and a small but highly productive team of 15 contributors driving that commit volume, this looks like serious infrastructure being built by practitioners for practitioners.

Score: 49 · Hot

Gaining traction — heating up

Stars: 49.6k
Forks: 4.7k
Contributors: 15
Language: Python
Downloads (7d): 95.1k

pypi/scrapling

Score updated May 13, 2026

Related projects

Airbyte is an open-source tool that automatically moves data from over 600 sources — like databases, apps, and APIs — into wherever your team needs it, such as data warehouses or AI applications. Think of it as a universal plumbing system for data, available either as a self-hosted solution or a managed cloud service.

Why it matters: As AI products increasingly depend on having clean, connected data, Airbyte gives builders a head start by eliminating the costly, time-consuming work of building custom data pipelines from scratch. With 1,196 contributors and 21,000+ stars, it has strong community momentum and represents a serious open-source alternative to expensive commercial data integration vendors like Fivetran or Stitch.

Python · 21.3k stars · 5.2k forks · 1,196 contributors

Apache Spark is an open-source platform that lets companies process and analyze massive amounts of data extremely fast — think analyzing billions of records in seconds rather than hours. It works across multiple programming languages and handles everything from running database-style queries to training machine learning models, all within a single system.

Why it matters: With over 43,000 stars and 3,400 contributors, Spark is effectively the industry standard for big data processing, meaning any data-heavy product — from analytics dashboards to AI pipelines — is likely built on or competing with it. Founders building data-intensive products should know that Spark is the backbone most enterprises already trust, making it a safe foundation to build on or a benchmark to measure alternatives against.

Scala · 43.3k stars · 29.2k forks · 3,403 contributors

Apache Airflow is an open-source platform that lets teams build, schedule, and monitor automated workflows — think of it as a programmable system that ensures the right tasks run in the right order at the right time, whether that's pulling data from APIs, running reports, or triggering business processes. With over 45,000 stars and 4,000+ contributors, it has become one of the most widely adopted tools for orchestrating complex, multi-step data operations across organizations of all sizes.

Why it matters: For any company building data-driven products or AI features, Airflow solves a critical operational problem: reliably moving and transforming data at scale without manual intervention, which is a foundational requirement before any meaningful analytics or machine learning can happen. Its massive adoption means a huge talent pool already knows it, its ecosystem of integrations is extensive, and betting on it carries low platform risk — making it a safe, strategic choice for teams building data infrastructure.

Python · 45.4k stars · 17.1k forks · 4,292 contributors · 4,289.7k downloads/wk

NumPy is the foundational software library that lets Python programs work with large sets of numbers and mathematical data quickly and efficiently — think of it as the engine that powers most data analysis and scientific software built in Python. With over 2,000 contributors and decades of development, it handles everything from basic arithmetic on massive datasets to complex mathematical operations used in research and industry.

Why it matters: Nearly every major data science, AI, and analytics tool built in Python — including TensorFlow, PyTorch, and Pandas — depends on NumPy under the hood, meaning it sits at the foundation of a multi-billion dollar software ecosystem. For builders, this means NumPy's stability, performance, and adoption make it a safe, battle-tested dependency when building data-heavy products, and its 32,000 GitHub stars signal it's effectively an industry standard rather than a niche tool.

Python · 32.0k stars · 12.4k forks · 2,071 contributors
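A tiny, generic illustration of why the ecosystem leans on NumPy: a single vectorized expression replaces a Python-level loop, with the arithmetic running in compiled C. The numbers below are made up for the example.

```python
import numpy as np

# Element-wise arithmetic and reductions run in compiled C inside NumPy,
# so one expression replaces an explicit Python loop over the array.
prices = np.array([19.99, 5.50, 100.00])
with_tax = prices * 1.08   # element-wise multiply, no loop
total = with_tax.sum()     # reduction in C

print(round(total, 2))     # prints 135.53
```

The same pattern scales from three elements to hundreds of millions, which is why frameworks like PyTorch and Pandas adopt NumPy's array model rather than reinvent it.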