Experience: 6–10 Years
Location: Remote (Work from Home)
Mode of Engagement: Full-time
No. of Positions: 2
Educational Qualification: Bachelor's degree in Computer Science, IT, or a related field
Industry: IT / Software Services / Data & AI
Notice Period: Immediate Joiners Preferred
What We Are Looking For
Strong hands-on experience in Python-based web scraping and crawling using Requests, Scrapy, Selenium, and Playwright.
Deep hands-on experience with JavaScript-heavy and dynamic websites using Selenium, Playwright, or similar browser automation frameworks.
Proven expertise in handling large-scale enterprise data scraping across complex, high-traffic platforms.
Ability to lead and mentor a web scraping team in building and scaling enterprise-grade crawling solutions using Python, Requests, Scrapy, Selenium, and Playwright.
Strong understanding of HTTP protocols, cookies, sessions, headers, tokens, and browser storage (local/session storage).
Hands-on experience with proxy rotation, IP management, routers, rate limiting, fingerprinting, and anti-bot evasion techniques.
Ability to design and maintain scalable, reliable, and fault-tolerant crawling architectures.
Responsibilities
Build scalable ETL pipelines using Python and AWS
Develop and maintain web scraping systems (Scrapy, Selenium, Beautiful Soup)
Integrate OpenAI / Azure OpenAI APIs for structured data extraction
Build workflow automation using n8n or similar tools
Implement orchestration using Dagster or Airflow
Manage structured storage using PostgreSQL, S3, and Parquet
Lead/mentor junior engineers.
Qualifications
6–10 years of hands-on experience in web scraping and crawling using Python.
Strong practical knowledge of Requests, Scrapy, Selenium, Playwright, AWS (RDS/S3/EC2), the OpenAI API, and ETL design.
Proven experience scraping JavaScript-heavy, dynamic, and access-controlled enterprise websites.
Deep understanding of cookies, sessions, headers, proxies, IP rotation, routers, and anti-detection strategies.
Experience working with SQL/NoSQL databases for structured data storage.
Strong debugging, system design, and problem-solving skills.
Ability to independently own and scale scraping systems in a remote environment.