Danbooru 2 upgrade on Jan 5, 2013

I'm planning on finally upgrading this site to the new code on January 5, 2013. This site will run in read only mode during the migration period (which should last a day) and after some testing (another day) I'll flip the switch and make the new site live.

If you haven't paid much attention to Danbooru 2, I've scaled back most of my loftier ambitions, and the new site is much closer to what we have now. You can kick the tires and give it a run at http://testbooru.donmai.us.

Is the API at least going to remain the same? It doesn't seem to exist on Testbooru.

The API will be deprecated but should still work. What resource are you trying to access?

This is super late but could you add a second sort when you do a sort by count on the tag list?

Currently, if you sort by count, the first page is ordered correctly, but changing pages reshuffles entries that share the same count. If it sorted by count and then alphabetically, it would overcome this issue.

If it's too tricky to implement, that's fine, but it would be a significant improvement for the few people who sort by count.

There are several issues with Danbooru 2 that weren't resolved back when it was still in an active development/refinement phase. Should we bring them up now, or wait for the deployment of the new version?

Also, is it better to write about them here or make a new "issue" at Github, or maybe both?

GitHub would be the more likely place for actual bugs.

albert said:
The API will be deprecated but should still work. What resource are you trying to access?

iqdb uses the XML API for its updates. If it's deprecated, will there be a replacement eventually?

It would be good to test the danbooru2 API on testbooru, is it possible to enable it there?

I consider the current list of open bugs on GitHub minor enough to wait until after deployment. If there are any in particular that you think should be prioritized, post about them here.

http://testbooru.donmai.us/post/index.xml still works for me. Remember you have to pass in your login credentials.

The new resource is at http://testbooru.donmai.us/posts.xml
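
For scripts hitting the API, the rename described above amounts to swapping /post/index.&lt;format&gt; for /posts.&lt;format&gt;. Here's a minimal sketch of building the new URLs in Python; note that the auth parameter names used below are my assumption, not something confirmed in this thread:

```python
from urllib.parse import urlencode

BASE = "http://testbooru.donmai.us"

def posts_url(fmt="xml", credentials=None, **search):
    """Build a URL for the new /posts endpoint.

    Old style: /post/index.xml  ->  new style: /posts.xml
    `credentials` is a dict like {"login": ..., "password_hash": ...};
    both key names are assumptions, not confirmed by this thread.
    """
    params = dict(search)
    if credentials:
        params.update(credentials)
    query = urlencode(params)
    return f"{BASE}/posts.{fmt}" + (f"?{query}" if query else "")

print(posts_url())                       # http://testbooru.donmai.us/posts.xml
print(posts_url(fmt="json", tags="touhou"))
```

The same pattern should apply to the other renamed resources mentioned later in the thread (artists, notes, comments, wiki pages), with only the resource name changing.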

It's pretty hard to judge if things are working, when the majority of samples and thumbs are missing. At the very least, the flash thumbnails are all broken, ditto for some samples of posts which *do* have data, such as this one.

One immediately-striking regression is the paginator. Bring back the prev/next page ([<<] and [>>]) buttons, and unless it's a gigantic win for the DB performance (which I submit we won't be able to judge until after the real-life deployment anyway), the buttons for last pages too.

Next page located in a stable position with a fixed, page-unique text is incredibly important for fast, keyboard- and muscle memory-friendly navigation.

Knowing how many pages of results there are is essential for any kind of serious moderation or tagging work; I can't imagine working without knowing whether I'm facing 5 or 500 pages of it. Since the current codebase can handle paginating until the end, the new one should be able to as well, and if it can't, it's simply not ready for deployment. Mystery-meat moderation will cause a dramatic drop in the volume and quality of tagging work, because it means we won't be able to judge whether queries are tight enough, or whether we can simply spend enough time and energy to do X (and you want to start from the last page if you do). It will also make it next to impossible to discuss any non-trivial tagging change, because judging the impact of any multiple-tag scenario is no longer viable. The caching performance mentioned in bug 332 is a nice goal, but things being fast will be useless if tagging is too painful to do.

I too would like to keep the page browser at the bottom the same (though the last time I brought this up I was told to get a script to replace it).

I support keeping the page browsing system the same as well. Like the Japanese-letter person said (I'm stuck on my Wii, can't copy/paste -_-), it makes tagging easier. I wouldn't bother with tagging and tag cleanup at all if I didn't know what I was in for in terms of page numbers.

Also, I couldn't find where to make pools, so I'll ask here: will users be able to delete pools that they made, or will pool deletion remain mod-and-up only? I think at one point users could delete their own and janitors could delete pools, though I'm not 100% sure on the last one.

Edit: thank you, log. I remembered some complaining about that a while back, but I didn't know if that was resolved or not. Apparently it's still going.

I can't delete pools so probably mods and up.

The Testbooru server has a limited amount of disk space so I couldn't copy over every image. It's sufficient to know that it works for 100 or so.

I can add the arrows back.

The last page link currently doesn't even work for tags with 1,000+ pages (or 20,000 posts). I'm assuming people don't care about large searches like this. What are the search sizes people are working with? 10 pages? 50? 100? In any case I still consider this a minor view change and I can handle it after deployment.

Being able to know how many results you got for your search is too useful to be left out of the UI.

A little note that tells me how many posts are in the list I'm looking at, above or below the search bar depending on space, would give us everything we need.

Optionally, a "last page" button on the end of the page numbers would help make the UI feel more complete. If the issue with displaying a link to the last page is generating the last page, then this saves the database calculating the page every search.

albert said:
The last page link currently doesn't even work for tags with 1,000+ pages (or 20,000 posts). I'm assuming people don't care about large searches like this. What are the search sizes people are working with? 10 pages? 50? 100? In any case I still consider this a minor view change and I can handle it after deployment.

If it's above the size the paginator cuts off at, it's entirely reasonable to remove the link; it won't be useful anyway. Otherwise, there's no good limit, because different things are "searches people work with" at different times. If I'm doing a quick fix, more than 10 pages might be too much to finish during my morning coffee. If I'm judging the impact of a tagging change, 100 pages might be "alright" but 300 "too much", and I need to know these sizes. And in several instances, we've actually had brave souls go through 500+ pages of results and retag/review things. The limit of 1,000 is already a natural boundary, and introducing ones below that will be harmful to important work.

Serlo said:
Optionally, a "last page" button on the end of the page numbers would help make the UI feel more complete. If the issue with displaying a link to the last page is generating the last page, then this saves the database calculating the page every search.

It's all about knowing the number of pages, and if the DB counts until the last page, getting numbers before that is free. So that wouldn't be a real fix for either the DB or the UI.

Zekana said:
Like the Japanese-letter person said (I'm stuck on my Wii, can't copy/paste -_-)

Tee hee, that gave me a laugh. Sorry about my annoying user name, but you can always say "Hazuki" if you can't type moonrunes.

albert said:
The API will be deprecated but should still work. What resource are you trying to access?

...Deprecated? I use the API heavily, so even just deprecated doesn't sound good at all.

Last time I tried post/index.json, it didn't work, but now it does; I haven't tried the other resources.

*messed up old post; deleted*

EDIT: artists.xml seemed to be responding, but how do you access the artist's URL fields, which seemed to be missing?

EDIT2: Since GitHub is accessible again, I was reminded that the correct way of accessing notes and comments is notes.xml?group_by=note and comments.xml?group_by=comment respectively. However, how do you access the wiki, and what is the correct way of accessing the artist API parameters (URLs)?

EDIT3: oh, thanks!

Question: will Danbooru 2 have any system that prevents users from uploading works by prohibited artists, or at least warns them that an artist is prohibited before they upload such a piece?

ghostrigger said:
*messed up old post; deleted*

EDIT: artists.xml seemed to be responding, but how do you access the artist's URL fields, which seemed to be missing?

EDIT2: Since GitHub is accessible again, I was reminded that the correct way of accessing notes and comments is notes.xml?group_by=note and comments.xml?group_by=comment respectively. However, how do you access the wiki, and what is the correct way of accessing the artist API parameters (URLs)?

Artist URLs should be fixed. By the way, all API endpoints also return JSON results (artists.json in this case) which I think is much easier to parse.

The wiki pages are accessible at /wiki_pages.json.
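
Since the JSON endpoints return plain JSON arrays, consuming them from a script is straightforward with just the standard library. A sketch below; the field names in the sample payload are illustrative guesses, not the confirmed artists.json schema:

```python
import json

# Illustrative sample of what an artists.json response might contain;
# the field names ("name", "urls") are guesses, not the actual schema.
sample_response = '''
[
  {"id": 1, "name": "some_artist", "urls": ["http://example.com/gallery"]},
  {"id": 2, "name": "another_artist", "urls": []}
]
'''

artists = json.loads(sample_response)
for artist in artists:
    # Each entry parses directly into a dict -- no XML traversal needed.
    print(artist["name"], artist.get("urls", []))
```

In practice you would fetch the body from http://testbooru.donmai.us/artists.json (with credentials, per the earlier posts) and pass it to json.loads the same way.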

The Danbooru downloader should still work.

I've fixed the complaints about the paginator. The last page is now linked provided it's less than 1,000 pages.

Could you populate the help:API page the Site Map links to?

I have to admit, I have not had time to pay much attention to Danbooru 2 before now, but looking at it now it seems pretty straightforward. I rather like the tag list calling out art/copy/char tags before the general tags too.

Question about the batch alias/implication import. Before, it would redirect me to the jobs page and show me what I added, but this time there's no redirect. I went to the jobs page manually and the aliases I submitted weren't there, and they aren't in the aliases list either. What's the best way for me to quickly tell if aliases or implications I've submitted went through?

If the wiki is going to be an alternate to the posts instead of the tags, shouldn't the whole article be displayed?

jxh2154 said:
Question about the batch alias/implication import. Before, it would redirect me to the jobs page and show me what I added, but this time there's no redirect. I went to the jobs page manually and the aliases I submitted weren't there, and aren't in the aliases list either. What's the best way for me to quickly tell if aliases or implication I've submitted went through?

Yeah, sorry, that's not clear. Your aliases were created and they do create jobs, but the queue is much faster now and starts processing almost instantly. I guess the best way is to just search for the alias or implication. If the job errored out, it will say so in the job listing.