I'm facing an issue with web scraping https://dexscreener.com. I've discovered that in order to receive a 200 response, it's necessary to send not only a User-Agent header but also valid cookies. I'm currently copying the cookies manually from Firefox; without them I receive a 403 error, and the cookies I copy stop working about 30 minutes after they are created. I need a way around this, because the scraping should run automatically, without manual cookie replacement.

Thanks to everyone who joins the discussion. I would greatly appreciate your help!
Here s my current code:
import requests

url = "https://dexscreener.com"

# Cloudflare bot-management cookie, copied manually from Firefox
cookies = {"__cf_bm": "*my_cookies*"}
headers = {
    "User-Agent": "*my_user-agent*"
}

session = requests.Session()
session.cookies.update(cookies)
session.headers.update(headers)

response = session.get(url)
print(response)
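For context, the direction I've been exploring is to automate the cookie refresh itself: drive a real browser once to let Cloudflare set its cookies, copy them into the requests session, and repeat that step whenever a 403 comes back. Below is a minimal sketch of that idea, assuming Selenium with Firefox/geckodriver is installed (`pip install selenium`); the function names are my own, and Cloudflare may still block headless browsers, so this is not guaranteed to work:

```python
import requests

URL = "https://dexscreener.com"


def cookies_from_browser(driver_cookies):
    """Convert Selenium-style cookie dicts ({'name': ..., 'value': ...})
    into the flat name->value mapping that requests.Session expects."""
    return {c["name"]: c["value"] for c in driver_cookies}


def refresh_session(user_agent):
    """Launch a real browser, let the site set its cookies (including
    __cf_bm), and copy them into a fresh requests.Session."""
    from selenium import webdriver  # assumed dependency

    options = webdriver.FirefoxOptions()
    options.add_argument("--headless")
    driver = webdriver.Firefox(options=options)
    try:
        driver.get(URL)
        session = requests.Session()
        session.headers.update({"User-Agent": user_agent})
        session.cookies.update(cookies_from_browser(driver.get_cookies()))
        return session
    finally:
        driver.quit()


def get_with_refresh(session, user_agent, url=URL):
    """Fetch the page; on a 403 (expired cookies), refresh once and retry."""
    response = session.get(url)
    if response.status_code == 403:
        session = refresh_session(user_agent)
        response = session.get(url)
    return response
```

Since __cf_bm expires roughly every 30 minutes, the refresh step would run at most a couple of times per hour. It may also be worth checking whether the data you need is available from Dexscreener's public HTTP API instead of the HTML pages, which could avoid the cookie problem entirely.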