Description
? Which models do you want to scrape
: kitalovexoxovip
[scraper.scrape_context_manager:363] starting script scraper.py:363
[profiles.print_current_profile:138] Using profile: main_profile profiles.py:138
Status - UP
[scraper.normal_post_process:175] Progress 1/1 model scraper.py:175
? Scrape entire paid page [WARNING ONLY USE IF NEEDED i.e for DELETED MODELS] True
? Which area(s) would you like to scrape? ['Profile', 'Timeline', 'Pinned', 'Archived', 'Highlights', 'Stories', 'Labels']
[profile.print_profile_info:103] Name: Kita | Username: kitalovexoxovip | ID: 315668348 | Joined: 2023-03-15T00:00:00+00:00 profile.py:103
- 79 posts
-- 8 photos
-- 70 videos
-- 0 audios
- 0 archived posts
[of.process_profile:173] Avatar : https://public.onlyfans.com/files/i/ib/ib4/ib4u0v9xkotbhct3avvre1a5o5q9eucp1678887215/315668348/avatar.jpg of.py:173
╭──────────────────────────────────────────────────────────────────────────────╮
│ Attempt 1/10 │
╰──────────────────────────────────────────────────────────────────────────────╯
⠋ Getting highlight...
Pages Progress: 0
[filters.dupefilter:56] Removing duplicate media filters.py:56
[filters.posts_type_filter:89] filtering Media to images,audios,videos filters.py:89
[misc.download_picker:34] kitalovexoxovip (0 photos, 0 videos, 0 audios, 0 skipped, 0 failed) misc.py:34
? Do you want to continue with script [scraper.scrape_context_manager:374] scraper.py:374
===========================
Script Finished
Run Time: 0:00:43
===========================
? Do you want to continue with script Yes
? What would you like to do? Download content from a user
? Do you want to reset username selection No
Welcome, Ori k | u297424175
[scraper.scrape_context_manager:363] starting script scraper.py:363
[profiles.print_current_profile:138] Using profile: main_profile profiles.py:138
Status - UP
[scraper.normal_post_process:175] Progress 1/1 model scraper.py:175
[?] Which area(s) would you like to scrape?
◉ Profile
○ Timeline
❯ ○ Pinned
○ Archived
○ Highlights
○ Stories
○ Messages
○ Purchased
○ Labels
Activity
datawhores commented on Aug 31, 2023
Can you rerun the script with
ofscraper --output debug
I'm guessing you're getting hit by the auto-dater
But I'm not sure if there is another issue
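One way to capture that debug run so it can be shared later is to tee the output to a file; the redirection below is ordinary shell, not an ofscraper option:
ofscraper --output debug 2>&1 | tee ofscraper-debug.txt   # tee/redirect is plain shell, not part of ofscraper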
orithecapper commented on Sep 1, 2023
[ori@GayGaByte Onlyfans_Data]$ ofscraper --output debug
DEBUG [start.startvalues:103] Namespace(version='3.2.2', config=None, profile=None, log='OFF', discord='OFF', start.py:103
output='DEBUG', username=set(), excluded_username=set(), daemon=None, original=False, letter_count=False,
action=None, dupe=False, posts=[], excluded_posts=[], filter='.*', neg_filter=None, scrape_paid=False,
download_type=None, label=None, before=None, after=None, mediatype=[], size_max=None, size_min=None,
mass_msg=None, timed_only=None, account_type=None, renewal=None, sub_status=None,
user_list={'ofscraper.main'}, black_list=[], sort='name', desc=False, users_first=False, no_cache=False,
key_mode=None, dynamic_rules=None, part_cleanup=False, downloadbars=False, downloadsems=None,
downloadthreads=None, command=None, excluded_post=[])
DEBUG [start.startvalues:104] Linux-6.4.12-lqx1-2-lqx-x86_64-with-glibc2.38 start.py:104
DEBUG [start.startvalues:105] {'main_profile': 'main_profile', 'save_location': start.py:105
'/run/media/your_username/Kek/Onlyfans_Data', 'file_size_limit': 0, 'file_size_min': 0, 'dir_format':
'{model_username}/{responsetype}/{mediatype}/', 'file_format': '{filename}.{ext}', 'textlength': 0,
'space-replacer': ' ', 'date': 'MM-DD-YYYY', 'metadata':
'{configpath}/{profile}/.data/{model_username}_{model_id}', 'filter': ['Images', 'Audios', 'Videos'],
'threads': 5, 'code-execution': False, 'custom': None, 'mp4decrypt':
'/home/your_username/.config/ofscraper/bin/mp4decrypt', 'ffmpeg':
'/home/your_username/.config/ofscraper/bin/ffmpeg', 'discord': '', 'private-key': None, 'client-id': None,
'key-mode-default': 'cdrm2', 'keydb_api': '', 'dynamic-mode-default': 'deviint', 'partfileclean': False,
'backend': 'aio', 'download-sems': 6, 'maxfile-sem': 0, 'downloadbars': False, 'cache-mode': 'sqlite',
'responsetype': {'timeline': 'Posts', 'message': 'Messages', 'archived': 'Archived', 'paid': 'Messages',
'stories': 'Stories', 'highlights': 'Stories', 'profile': 'Profile', 'pinned': 'Posts'}}
[start.startvalues:106] config path: /home/your_username/.config/ofscraper/config.json start.py:106
[start.startvalues:107] profile path: /home/your_username/.config/ofscraper/main_profile start.py:107
[start.startvalues:108] log folder: /home/your_username/.config/ofscraper/logging start.py:108
DEBUG [start.startvalues:109] ssl DefaultVerifyPaths(cafile='/etc/ssl/cert.pem', capath='/etc/ssl/certs', start.py:109
openssl_cafile_env='SSL_CERT_FILE', openssl_cafile='/etc/ssl/cert.pem', openssl_capath_env='SSL_CERT_DIR',
openssl_capath='/etc/ssl/certs')
DEBUG [start.startvalues:110] python version 3.11.3 start.py:110
DEBUG [start.startvalues:111] certifi /usr/lib/python3.11/site-packages/certifi/cacert.pem start.py:111
DEBUG [start.startvalues:112] number of threads available on system 8 start.py:112
Welcome to OF-Scraper Version 3.2.2
? What would you like to do? Edit advanced config.json settings
[config.edit_config_advanced:186] config path: /home/your_username/.config/ofscraper/config.json config.py:186
? Number of Download processes/threads: 7
? Number of semaphores per thread: 6
? Max Number of open files per thread: 0
? What would you like to use for dynamic rules deviint
? sqlite should be fine unless your using a network drive
See https://grantjenks.com/docs/diskcache/tutorial.html#caveats sqlite
? Make selection for how to retrive long_message cdrm2
? keydb api key:
? Enter path to client id file
? Enter path to private-key
? Select Which Backend you want:
httpx
? Enable auto file resume No
? edit custom value:
null
? show download progress bars
This can have a negative effect on performance with lower threads Yes
config.json has been successfully edited.
? Do you want to continue with script Yes
? What would you like to do? Download content from a user
[config.auto_update_config:142] Auto updating config... config.py:142
DEBUG [scraper.check_config:351] final mp4decrypt path /home/your_username/.config/ofscraper/bin/mp4decrypt scraper.py:351
DEBUG [scraper.check_config:352] final ffmpeg path /home/your_username/.config/ofscraper/bin/ffmpeg scraper.py:352
Welcome, Ori k | u297424175
⠋ Getting your subscriptions (this may take awhile)...DEBUG [subscriptions.scrape_subscriptions_active:81] usernames offset 0: usernames retrived -> subscriptions.py:81
['jinju0721', 'baesylvie', 'tymwitsfree', 'cockplug', 'elly0001', 'adam_blake',
'princesskitti3_free', 'u249726743', 'hotjulia1992']
⠴ Getting your subscriptions (this may take awhile)...DEBUG [subscriptions.scrape_subscriptions_active:81] usernames offset 10: usernames retrived -> subscriptions.py:81
['lunaraslan', 'annzehavi', 'lantti', 'cloudmay', 'thaliacerato', 'kitalovexoxo', 'kitalovexoxovip',
'liza_feet4', 'nebula3']
⠼ Getting your subscriptions (this may take awhile)...DEBUG [subscriptions.scrape_subscriptions_active:81] usernames offset 19: usernames retrived -> subscriptions.py:81
['nebula3']
DEBUG [subscriptions.scrape_subscriptions_active:81] usernames offset 9: usernames retrived -> subscriptions.py:81
['hotjulia1992', 'lunaraslan', 'annzehavi', 'lantti', 'cloudmay', 'thaliacerato', 'kitalovexoxo',
'kitalovexoxovip', 'liza_feet4']
DEBUG [subscriptions.get_subscriptions:65] Total active subscriptions found 18 subscriptions.py:65
⠙ Getting your subscriptions (this may take awhile)...DEBUG [subscriptions.scrape_subscriptions_disabled:100] usernames offset 10: usernames retrived -> subscriptions.py:100
['bikers99_free', 'nordichotwifeswe', 'u295570662', 'arielarains321', 'mfoderpage']
⠦ Getting your subscriptions (this may take awhile)...DEBUG [subscriptions.scrape_subscriptions_disabled:100] usernames offset 0: usernames retrived -> subscriptions.py:100
['lewisandlucy', 'jarodppv', 'agam_amiri', 'noga_bloom', 'kim_inbar', 'irenablonde', 'mika_sol',
'abellajade', 'foxiifem']
⠏ Getting your subscriptions (this may take awhile)...DEBUG [subscriptions.scrape_subscriptions_disabled:100] usernames offset 9: usernames retrived -> subscriptions.py:100
['foxiifem', 'bikers99_free', 'nordichotwifeswe', 'u295570662', 'arielarains321', 'mfoderpage']
DEBUG [subscriptions.get_subscriptions:65] Total expired subscriptions found 14 subscriptions.py:65
DEBUG [lists.get_list_users:164] users count without Dupes 0 found lists.py:164
DEBUG [lists.get_blacklist:49] Lists found on profile [] lists.py:49
DEBUG [lists.get_list_users:164] users count without Dupes 0 found lists.py:164
DEBUG [userselector.filterNSort:82] username count no filters: 32 userselector.py:82
? Which models do you want to scrape
: kitalovexoxovip
[scraper.scrape_context_manager:363] starting script scraper.py:363
[profiles.print_current_profile:138] Using profile: main_profile profiles.py:138
Status - UP
[scraper.normal_post_process:175] Progress 1/1 model scraper.py:175
? Scrape entire paid page [WARNING ONLY USE IF NEEDED i.e for DELETED MODELS] False
? Which area(s) would you like to scrape? ['Profile', 'Timeline', 'Archived', 'Labels']
[profile.print_profile_info:103] Name: Kita | Username: kitalovexoxovip | ID: 315668348 | Joined: profile.py:103
2023-03-15T00:00:00+00:00
-- 8 photos
-- 71 videos
-- 0 audios
[of.process_profile:173] Avatar : https://public.onlyfans.com/files/i/ib/ib4/ib4u0v9xkotbhct3avvre1a5o5q9eucp1678887215/315668348/avatar.jpg of.py:173
DEBUG [timeline.get_timeline_media:123] Timeline Cache 84 found timeline.py:123
DEBUG [timeline.get_after:233] set initial timeline to last post timeline.py:233
DEBUG [timeline.get_timeline_media:127] setting after for timeline to 1693519089.0 for kitalovexoxovip timeline.py:127
DEBUG [timeline.scrape_timeline_posts:51] 2023-08-31T21:58:09+00:00 timeline.py:51
DEBUG [timeline.scrape_timeline_posts:57] https://onlyfans.com/api2/v2/users/315668348/posts?limit=100&order=publish_date_asc&skip_users=all&skip_users_dups=1&afterPublishTime=1693519089.0&pinned=0&format=infinite timeline.py:57
DEBUG [timeline.scrape_timeline_posts:69] timestamp:2023-08-31T21:58:09+00:00 -> number of post found 0 timeline.py:69
DEBUG [timeline.get_timeline_media:160] Timeline Count with Dupes 0 found timeline.py:160
DEBUG [timeline.get_timeline_media:170] Timeline Count without Dupes 0 found timeline.py:170
DEBUG [of.process_timeline_posts:111] Timeline Media Count with locked 0 of.py:111
DEBUG [of.process_timeline_posts:112] Removing locked timeline media of.py:112
DEBUG [archive.get_archived_media:127] Archived Cache 0 found archive.py:127
DEBUG [archive.get_archived_media:131] setting after for archive to 0 for kitalovexoxovip archive.py:131
DEBUG [archive.scrape_archived_posts:60] https://onlyfans.com/api2/v2/users/315668348/posts/archived?limit=100&order=publish_date_asc&skip_users=all&skip_users_dups=1&format=infinite archive.py:60
DEBUG [archive.scrape_archived_posts:72] timestamp:1970-01-01T00:00:00+00:00 -> number of post found 0 archive.py:72
DEBUG [archive.get_archived_media:164] Archived Count with Dupes 0 found archive.py:164
DEBUG [archive.get_archived_media:171] Archived Count without Dupes 0 found archive.py:171
DEBUG [of.process_archived_posts:129] Archived Media Count with locked 0 of.py:129
DEBUG [of.process_archived_posts:130] Removing locked archived media of.py:130
DEBUG [labels.scrape_labels:91] offset:0 -> labels names found 3 labels.py:91
DEBUG [labels.scrape_labels:92] offset:0 -> hasMore value in json False labels.py:92
DEBUG [labels.get_labels:73] Labels name count without Dupes 3 found labels.py:73
DEBUG [labels.scrape_labelled_posts:172] offset:0 -> labelled posts found 1 labels.py:172
DEBUG [labels.scrape_labelled_posts:173] offset:0 -> hasMore value in json False labels.py:173
DEBUG [labels.scrape_labelled_posts:172] offset:0 -> labelled posts found 4 labels.py:172
DEBUG [labels.scrape_labelled_posts:173] offset:0 -> hasMore value in json False labels.py:173
⠧ Getting posts from labels...
DEBUG [labels.scrape_labelled_posts:172] offset:0 -> labelled posts found 15 labels.py:172
DEBUG [labels.scrape_labelled_posts:173] offset:0 -> hasMore value in json False labels.py:173
DEBUG [labels.get_labelled_posts:153] Labels count without Dupes 3 found labels.py:153
DEBUG [filters.filterMedia:13] filter 1-> all media no filter count: 29 filters.py:13
[filters.dupefilter:56] Removing duplicate media filters.py:56
DEBUG [filters.filterMedia:17] filter 2-> all media dupe filter count: 29 filters.py:17
DEBUG [filters.filterMedia:20] filter 3-> all media datesort count: 29 filters.py:20
[filters.posts_type_filter:89] filtering Media to images,audios,videos filters.py:89
DEBUG [filters.filterMedia:24] filter 4-> all media post media type filter count: 29 filters.py:24
DEBUG [filters.filterMedia:27] filter 5-> all media post date filter: 29 filters.py:27
DEBUG [filters.filterMedia:30] filter 6-> all media post timed post filter count: 29 filters.py:30
DEBUG [filters.filterMedia:33] filter 7-> all media post text filter count: 29 filters.py:33
DEBUG [filters.filterMedia:36] filter 8-> all download type filter count: 29 filters.py:36
DEBUG [filters.filterMedia:40] filter 9-> mass message filter count: 29 filters.py:40
DEBUG [filters.filterMedia:44] filter 11-> final media count from retrived post: 29 filters.py:44
DEBUG [misc.medialist_filter:19] Number of unique media ids in database for kitalovexoxovip: 86 misc.py:19
DEBUG [misc.medialist_filter:21] Number of new mediaids with dupe ids removed: 0 misc.py:21
? Do you want to continue with script DEBUG [misc.medialist_filter:23] Removed previously downloaded avatars/headers misc.py:23
DEBUG [misc.medialist_filter:24] Final Number of media to download 0 misc.py:24
[misc.download_picker:34] kitalovexoxovip (0 photos, 0 videos, 0 audios, 0 skipped, 0 failed) misc.py:34
[scraper.scrape_context_manager:374] scraper.py:374
===========================
Script Finished
Run Time: 0:00:26
===========================
datawhores commented on Sep 2, 2023
The script will only scan for new content now
If you update to the latest version you can get more information about this, but right now I see that
after is being set automatically to 1693519089.0

You can override this with --after 2000 to fill in the missing downloads
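For reference, 1693519089 is simply the Unix timestamp of the newest post the script already knows about; on a system with GNU date you can confirm it matches the 2023-08-31T21:58:09+00:00 shown in the timeline debug output:
date -u -d @1693519089 +%Y-%m-%dT%H:%M:%S%z   # assumes GNU date; prints 2023-08-31T21:58:09+0000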
=================================
According to the api she has 0 archived posts
====================================================
Label seems to be working
====================================================
Lastly
The script will now inform you that you need to
run with --after 2000 --dupe
if you want to rescrape the entire timeline + redownload all downloads
Your files are already marked as downloaded, so they are all skipped
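Spelled out, the two variants being suggested here are roughly:
ofscraper --after 2000          # rescan the entire timeline, download new files only
ofscraper --after 2000 --dupe   # rescan the entire timeline and redownload everything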
orithecapper commented on Sep 4, 2023
Okay let me try
orithecapper commented on Sep 4, 2023
Trying to login attempt:1/10
[scraper.check_auth:340] Auth Failed scraper.py:340
Note: Browser Extractions only works with default browser profile
? Select how to retrive auth information Enter Each Field Manually
You'll need to go to onlyfans.com and retrive header information
Go to https://github.com/datawhores/OF-Scraper and find the section named
'Getting Your Auth Info'
You only need to retrive the x-bc header,the user-agent, and cookie information
? Enter your sess cookie: ESC + Enter to finish input
❯ jv77kg8nvudfjjkjtkfq4e88vl
? Enter your auth_id cookie: ESC + Enter to finish input
❯ 297424175
? Enter your auth_uid cookie (leave blank if you don't use 2FA): ESC + Enter to finish input
❯
? Enter your user agent: ESC + Enter to finish input
❯ Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/117.0
? Enter your x-bc token: ESC + Enter to finish input
❯ 6ea5d26c7abbd1593419c85656b2782b0a06781d
{'auth': {'app-token': '33d57ade8c02dbc5a333db99ff9ae26a', 'sess':
'jv77kg8nvudfjjkjtkfq4e88vl', 'auth_id': '297424175', 'auth_uid_': '',
'user_agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101
Firefox/117.0', 'x-bc': '6ea5d26c7abbd1593419c85656b2782b0a06781d'}}
Writing to /home/ori/.config/ofscraper/main_profile/auth.json
Welcome, Ori k | im4sad
? Which models do you want to scrape
: kitalovexoxovip
[scraper.scrape_context_manager:367] starting script scraper.py:367
[profiles.print_current_profile:138] Using profile: main_profile profiles.py:138
Status - UP
[scraper.normal_post_process:177] Progress 1/1 model scraper.py:177
? Scrape entire paid page
[Warning: initial Scan can be slow]
[Caution: You should not need this unless your looking to scrape paid content from a deleted/banned model] False
? Which area(s) would you like to scrape? ['Profile', 'Timeline', 'Pinned', 'Archived', 'Highlights', 'Stories', 'Messages', 'Purchased', 'Labels']
[profile.scrape_profile_helper:49] Attempt 1/10 to get profile kitalovexoxovip profile.py:49
[profile.print_profile_info:103] Name: Kita | Username: kitalovexoxovip | ID: 315668348 | Joined: 2023-03-15T00:00:00+00:00 profile.py:103
-- 8 photos
-- 77 videos
-- 0 audios
[of.process_profile:178] Avatar : https://public.onlyfans.com/files/i/ib/ib4/ib4u0v9xkotbhct3avvre1a5o5q9eucp1678887215/315668348/avatar.jpg of.py:178
[timeline.get_timeline_media:128] Setting initial timeline scan date for kitalovexoxovip to 2023.08.31 timeline.py:128
Hint: append ' --after 2000' to command to force scan of entire timeline + download of new files only
Hint: append ' --after 2000 --dupe' to command to force scan of entire timeline + download of all files
[messages.get_messages:105] Setting initial message scan date for kitalovexoxovip to 1970.01.01 messages.py:105
Hint: append ' --after 2000' to command to force scan of entire messages + download of new files only
Hint: append ' --after 2000 --dupe' to command to force scan of entire messages + download of all files
╭──────────────────────────────────────────────────────────────────────────────╮
│ Attempt 1/10 │
╰──────────────────────────────────────────────────────────────────────────────╯
⠋ Getting highlight...
Pages Progress: 0
[filters.dupefilter:56] Removing duplicate media filters.py:56
[filters.posts_type_filter:89] filtering Media to images,audios,videos filters.py:89
[misc.download_picker:39] kitalovexoxovip (0 photos, 0 videos, 0 audios, 0 skipped, 0 failed) misc.py:39
? Do you want to continue with script [scraper.scrape_context_manager:378] scraper.py:378
Script Finished
Run Time: 0:00:31
datawhores commented on Sep 4, 2023
What did you run with?
orithecapper commented on Sep 4, 2023
{
    "config": {
        "main_profile": "main_profile",
        "save_location": "/run/media/ori/Kek/Onlyfans_Data",
        "file_size_limit": 0,
        "file_size_min": 0,
        "dir_format": "{model_username}/{responsetype}/{mediatype}/",
        "file_format": "{filename}.{ext}",
        "textlength": 0,
        "space-replacer": " ",
        "date": "MM-DD-YYYY",
        "metadata": "{configpath}/{profile}/.data/{model_username}_{model_id}",
        "filter": [
            "Images",
            "Audios",
            "Videos"
        ],
        "threads": 7,
        "code-execution": false,
        "custom": "null",
        "mp4decrypt": "/home/ori/.config/ofscraper/bin/mp4decrypt",
        "ffmpeg": "/home/ori/.config/ofscraper/bin/ffmpeg",
        "discord": "",
        "private-key": "",
        "client-id": "",
        "key-mode-default": "keydb",
        "keydb_api": "b4ef65533cd0739e32f1ba34f2e048721fa237587a967e197ac3b8c5e9b63e75",
        "dynamic-mode-default": "deviint",
        "partfileclean": true,
        "backend": "httpx",
        "download-sems": 6,
        "maxfile-sem": 0,
        "downloadbars": true,
        "cache-mode": "sqlite",
        "responsetype": {
            "timeline": "Posts",
            "message": "Messages",
            "archived": "Archived",
            "paid": "Messages",
            "stories": "Stories",
            "highlights": "Stories",
            "profile": "Profile",
            "pinned": "Posts"
        }
    }
}
orithecapper commented on Sep 4, 2023
ofscraper_main_profile_2023-09-04.log
datawhores commented on Sep 5, 2023
I mean, how did you start the script? What did you put into your command line?
orithecapper commented on Sep 5, 2023
ofscrapper
orithecapper commented on Sep 5, 2023
Look, what's your email? I will give you my password and username, just don't use it to buy stuff lol
datawhores commented on Sep 5, 2023
I've already explained why this won't work
You need to run
ofscraper --after 2000 --dupe
Read my previous post for why
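For reference, the rescrape flags could presumably be combined with the debug output requested earlier so the next run produces a shareable log; combining them this way is an assumption, and the tee redirection is plain shell, not an ofscraper option:
ofscraper --after 2000 --dupe --output debug 2>&1 | tee ofscraper-run.log   # flag combination assumed; tee is plain shell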
orithecapper commented on Sep 10, 2023
Downloads]$ ofscraper --after 2000 --dupe
Bro, I am telling you it still won't work!!!
orithecapper commented on Sep 10, 2023
It works for other models, though
datawhores commented on Sep 10, 2023
Can you re-explain?
Also, what exactly are you trying to do?
Put your logs here if need be
https://privatebin.info/directory/
or similar
Having them together is hard to decipher
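The script's own log files (like the ofscraper_main_profile_2023-09-04.log attached above) live in the log folder printed at startup, so assuming the default config location they can be located with:
ls ~/.config/ofscraper/logging   # default log folder from the startup debug output; adjust if your config lives elsewhere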
orithecapper commented on Sep 17, 2023
Hey man, want me to screen-share? I can add you on Discord :)