Just save this as karma.py and run it with Python 3.6 or higher (you'll also need the third-party requests library installed, e.g. via pip).
```python
import requests

INSTANCE_URL = "https://feddit.de"
TARGET_USER = "ENTER_YOUR_USERNAME_HERE"
LIMIT_PER_PAGE = 50

# Fetch the first page of the user's posts and comments
res = requests.get(f"{INSTANCE_URL}/api/v3/user?username={TARGET_USER}&limit={LIMIT_PER_PAGE}").json()

totalPostScore = 0
totalCommentScore = 0
page = 1

# Keep paging until the API returns no more posts or comments
while len(res["posts"]) + len(res["comments"]) > 0:
    totalPostScore += sum(x["counts"]["score"] for x in res["posts"])
    totalCommentScore += sum(x["counts"]["score"] for x in res["comments"])
    page += 1
    res = requests.get(f"{INSTANCE_URL}/api/v3/user?username={TARGET_USER}&limit={LIMIT_PER_PAGE}&page={page}").json()

print("Post karma: ", totalPostScore)
print("Comment karma: ", totalCommentScore)
print("Total karma: ", totalPostScore + totalCommentScore)
```
I've not used `requests`, but yes, their docs make it look like it really is that easy: https://requests.readthedocs.io/en/latest/user/quickstart/

Looks like the `.json()` call just returns a dictionary (or maybe a list of dictionaries), which means you can use all of Python's normal dictionary methods to find the data you're looking for!

Thanks for the link! This looks like an absurdly powerful library for HTTP needs and output manipulation, from the perspective of a scraping neophyte.