Advanced Usage

Advanced usage: General

this is non-comprehensive

I am always changing and adding little things. The best way to learn is just to look around. If you think a shortcut should probably do something, try it out! If you can't find something, let me know and I'll try to add it!

advanced mode

To avoid confusing clutter, several advanced menu items and buttons are hidden by default. When you are comfortable with the program, hit help->advanced mode to reveal them!

searching with wildcards

The autocomplete tag dropdown supports wildcard searching with '*'.

The '*' will match any number of characters. Every normal autocomplete search has a secret '*' on the end that you don't see, which is how full words get matched from you only typing in a few letters.

This is useful when you can only remember part of a word, or can't spell part of it. You can put '*' characters anywhere, but you should experiment to get used to the exact way these searches work. Some results can be surprising!
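As a rough sketch of how these searches behave, Python's fnmatch module implements the same '*' semantics. The tags below are illustrative, and the client's exact matching rules may differ in detail:

```python
from fnmatch import fnmatchcase

# Illustrative tags; the client's exact matching rules may differ.
tags = [
    "character:rei ayanami",
    "series:neon genesis evangelion",
    "evangelion unit-01",
]

def wildcard_search(query, tags):
    # every normal search gets a secret '*' on the end, which is how
    # full words get matched from only a few typed letters
    pattern = query + "*"
    return [tag for tag in tags if fnmatchcase(tag, pattern)]

wildcard_search("*gelion", tags)  # matches both evangelion tags
wildcard_search("char", tags)    # matches just the character tag
```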

You can select the special predicate inserted at the top of your autocomplete results (the highlighted '*gelion' and '*va*ge*' above). It will return all files that match that wildcard, i.e. every file for every other tag in the dropdown list.

This is particularly useful if you have a number of files with commonly structured, information-heavy tags, like this:

In this case, selecting the 'title:cool pic*' predicate will return all three images in the same search, where you can conveniently give them some more-easily searched tags like 'series:cool pic' and 'page:1', 'page:2', 'page:3'.

exclude deleted files

In the client's options is a checkbox to exclude deleted files. It recurs pretty much anywhere you can import, under 'import file options'. If you select this, any file you ever deleted will be excluded from all future remote searches and import operations. This can stop you from importing/downloading and filtering out the same bad files several times over. The default is off. You may wish to have it set one way most of the time, but switch it the other just for one specific import or search.

inputting non-english languages

If you typically use an IME to input Japanese or another non-english language, you may have encountered problems entering into the autocomplete tag entry control in that you need Up/Down/Enter to navigate the IME, but the autocomplete steals those key presses away to navigate the list of results. To fix this, press Insert to temporarily disable the autocomplete's key event capture. The autocomplete text box will change colour to let you know it has released its normal key capture. Use your IME to get the text you want, then hit Insert again to restore the autocomplete to normal behaviour.

tag display

If you do not like a particular tag or namespace, you can easily hide it with services->manage tag display:

This image is out of date, sorry!

You can exclude single tags, as shown above, or entire namespaces (enter the colon, like 'species:'), or all namespaced tags (use ':'), or all unnamespaced tags (''). 'all known tags' will be applied to everything, as well as any repository-specific rules you set.

A blacklist excludes whatever is listed; a whitelist excludes whatever is not listed.

This censorship is local to your client. No one else will experience your changes or know what you have censored.

importing and adding tags at the same time

Add tags before importing on file->import files lets you give tags to the files you import en masse, and intelligently, using regexes that parse filenames:

This should be somewhat self-explanatory to anyone familiar with regexes. I hate them, personally, but I recognise they are powerful and exactly the right tool to use in this case. This is a good introduction.
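As a minimal sketch of what such a parse does, assuming a hypothetical 'series - page number' filename format (the pattern and namespaces here are made up, not a dialog default):

```python
import re

# A sketch of the kind of regex the 'add tags before importing' dialog
# applies to filenames. The filename format here is hypothetical.
# e.g. "cool pic - page 03.jpg" -> series:cool pic, page:3
FILENAME_PATTERN = re.compile(r"(?P<series>.+?) - page (?P<page>\d+)")

def tags_from_filename(filename):
    match = FILENAME_PATTERN.match(filename)
    if match is None:
        return []
    return [
        "series:" + match.group("series"),
        "page:" + str(int(match.group("page"))),  # strip leading zeros
    ]
```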

Once you are done, you'll get something neat like this:

Which you can more easily manage by collecting:

Collections have a small icon in the bottom left corner. Selecting them actually selects many files (see the status bar), and performing an action on them (like archiving, uploading) will do so to every file in the collection. Viewing collections fullscreen pages through their contents just like an uncollected search.

Here is a particularly zoomed out view, after importing volume 2:

Importing with tags is great for long-running series with well-formatted filenames, and will save you literally hours of finicky tagging.

tag migration

At some point I will write some better help for this system, which is powerful. Be careful with it!

Sometimes, you may wish to move thousands or millions of tags from one place to another. These actions are now collected in one place: services->tag migration.

It proceeds from left to right, reading data from the source and applying it to the destination with the chosen action. There are multiple filters available to select which sorts of tag mappings or siblings or parents will be selected from the source. The source and destination can be the same: for instance, if you wanted to delete all 'clothing:' tags from a service, you would pull all those tags and then apply the 'delete' action on the same service.

You can import from and export to Hydrus Tag Archives (HTAs), which are external, portable .db files. In this way, you can move millions of tags between two hydrus clients, or share with a friend, or import from an HTA put together from a website scrape.

Tag Migration is a powerful system. Be very careful with it. Do small experiments before starting large jobs, and if you intend to migrate millions of tags, make a backup of your db beforehand, just in case it goes wrong.

This system was once much simpler, but it still had HTA support. If you wish to play around with some HTAs, there are some old user-created ones here.

custom shortcuts

Once you are comfortable with manually setting tags and ratings, you may be interested in setting some shortcuts to do it quicker. Try hitting file->shortcuts or clicking the keyboard icon on any media viewer window's top hover window.

There are two kinds of shortcuts in the program--reserved, which have fixed names, are undeletable, and are always active in certain contexts (related to their name), and custom, which you create and name and edit and are only active in a media viewer when you want them to. You can redefine some simple shortcut commands, but most importantly, you can create shortcuts for adding/removing a tag or setting/unsetting a rating.

Use the same 'keyboard' icon to set the current and default custom shortcuts.

finding duplicates

system:similar_to lets you run the duplicates processing page's searches manually. You can either insert the hash and hamming distance manually, or you can launch these searches automatically from the thumbnail right-click->find similar files menu. For example:

truncated/malformed file import errors

Some files, even though they seem ok in another program, will not import to hydrus. This is usually because the file has some 'truncated' or broken data, probably due to a bad upload or storage at some point in its internet history. While sophisticated external programs can usually patch the error (often rendering the bottom lines of a jpeg as grey, for instance), hydrus is not so clever. Please feel free to send me, the hydrus developer, these files or links to them, so I can check them out on my end and try to fix support.

If the file is one you particularly care about, the easiest solution is to open it in photoshop or gimp and save it again. Those programs should be clever enough to parse the file's weirdness, and then make a nice clean saved file when it exports. That new file should be importable to hydrus.

setting a password

The client offers a very simple password system, enough to keep out noobs. You can set it at database->set a password. It will thereafter ask for the password every time you start the program, and will not open without it. However, none of the database is encrypted, and someone with enough enthusiasm, or a tool and access to your computer, can still very easily see what files you have. The password is mainly to stop idle snoops checking your images if you are away from your machine.

Advanced usage: Tag Siblings

quick version

Tag siblings let you replace a bad tag with a better tag.

what's the problem?

Reasonable people often use different words for the same things.

A great example is in Japanese names, which are natively written surname first. character:ayanami rei and character:rei ayanami have the same meaning, but different users will use one, or the other, or even both.

Other examples are tiny syntactic changes, common misspellings, and unique acronyms:

A particular repository may have a preferred standard, but it is not easy to guarantee that all the users will know exactly which tag to upload or search for.

After some time, you get this:

Without continual intervention by janitors or other experienced users to make sure y⊇x (i.e. making the yellow circle entirely overlap the blue by manually giving y to everything with x), searches can only return x (blue circle) or y (yellow circle) or x∩y (the lens-shaped overlap). What we really want is x∪y (both circles).
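The set arithmetic here is simple to state in code; the file ids below are made up:

```python
# The venn diagram as sets of (made-up) file ids.
x = {1, 2, 3, 4}  # files with tag x (blue circle)
y = {3, 4, 5, 6}  # files with tag y (yellow circle)

searchable = (x, y, x & y)  # what plain searches can return
wanted = x | y              # what we really want: both circles
```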

So, how do we fix this problem?

tag siblings

Let's define a relationship, A->B, that means that any time we would normally see or use tag A or tag B, we will instead only get tag B:

Note that this relationship implies that B is in some way 'better' than A.

ok, I understand; now confuse me

This relationship is transitive, which means that as well as saying A->B, you can also say B->C, which implies A->C.

You can also have an A->C and B->C that does not include A->B.

The outcome of these two arrangements is the same (everything ends up as C), but the underlying semantics are a little different if you ever want to edit them.

Many complicated arrangements are possible:

Note that if you say A->B, you cannot say A->C; the left-hand side can only go to one. The right-hand side can receive many. The client will stop you from constructing loops.
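A toy sketch of the collapse, assuming a simple dict of A->B pairs (an illustration, not hydrus's actual implementation):

```python
# Each 'worse' tag maps to exactly one 'better' tag; chains are
# followed transitively to their end. The pairs here are made up.
siblings = {
    "character:rei ayanami": "character:ayanami rei",   # A->B
    "character:ayanami rei": "character:ayanami rei (evangelion)",  # B->C
}

def collapse(tag, siblings):
    seen = set()
    while tag in siblings:
        if tag in seen:
            # the real client stops you from constructing loops
            raise ValueError("sibling loop detected")
        seen.add(tag)
        tag = siblings[tag]
    return tag
```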

how you do it

Just open services->manage tag siblings, and add a few.

The client will automatically collapse the tagspace to whatever you set. It'll even work with autocomplete, like so:

Please note that siblings' autocomplete counts may be slightly inaccurate, as unioning the count is difficult to quickly estimate.

The client will not collapse siblings anywhere you 'write' tags, such as the manage tags dialog. You will be able to add or remove A as normal, but it will be written in some form of "A (B)" to let you know that, ultimately, the tag will end up displaying in the main gui as B:

Although the client may present A as B, it will secretly remember A! You can remove the association A->B, and everything will return to how it was. No information is lost at any point.

remote siblings

Whenever you add or remove a tag sibling pair to a tag repository, you will have to supply a reason (like when you petition a tag). A janitor will review this petition, and will approve or deny it. If it is approved, all users who synchronise with that tag repository will gain that sibling pair. If it is denied, only you will see it.

Advanced usage: Tag Parents

quick version

Tag parents let you automatically add a particular tag every time another tag is added. The relationship will also apply retroactively.

what's the problem?

Tags often fall into certain hierarchies. Certain tags always imply certain other tags, and it is annoying and time-consuming to add them all individually every time.

For example, whenever you tag a file with ak-47, you probably also want to tag it assault rifle, and maybe even firearm as well.

Another time, you might tag a file character:eddard stark, and then also have to type in house stark and then series:game of thrones. (you might also think series:game of thrones should actually be series:a song of ice and fire, but that is an issue for siblings)

Drawing more relationships would make a significantly more complicated venn diagram, so let's draw a family tree instead:

tag parents

Let's define the child-parent relationship 'C->P' as saying that tag P is the semantic superset/superclass of tag C. All files that have C should also have P, without exception. When the user tries to add tag C to a file, tag P is added automatically.

Let's expand our weapon example:

In that graph, adding ar-15 to a file would also add semi-automatic rifle, rifle, and firearm. Searching for handgun would return everything with m1911 and smith and wesson model 10.
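A toy sketch of that propagation, using the weapon example (an illustration only, not hydrus's implementation):

```python
# child -> set of parents; adding a tag also adds all its ancestors.
parents = {
    "ar-15": {"semi-automatic rifle"},
    "semi-automatic rifle": {"rifle"},
    "rifle": {"firearm"},
    "m1911": {"handgun"},
    "handgun": {"firearm"},
}

def tags_to_add(tag, parents):
    result = set()
    stack = [tag]
    while stack:
        current = stack.pop()
        if current not in result:
            result.add(current)
            stack.extend(parents.get(current, ()))
    return result
```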

This can obviously get as complicated and autistic as you like, but be careful of being too confident--this is just a fun example, but is an AK-47 truly always an assault rifle? Some people would say no, and beyond its own intellectual neatness, what is the purpose of attempting to create such a complicated and 'perfect' tree? Of course you can create any sort of parent tags on your local tags or your own tag repositories, but this sort of thing can easily lead to arguments between reasonable people. I only mean to say, as someone who does a lot of tag work, to try not to create anything 'perfect', as it usually ends up wasting time. Act from need, not toward purpose.

how you do it

Go to services->manage tag parents:

Which looks and works just like the manage tag siblings dialog.

Note that when you hit ok, the client will look up all the files with all your added tag Cs and retroactively apply/pend the respective tag Ps if needed. This could mean thousands of tags!

Once you have some relationships added, the parents and grandparents will show indented anywhere you 'write' tags, such as the manage tags dialog:

Hitting enter on cersei will try to add house lannister and series:game of thrones as well.

remote parents

Whenever you add or remove a tag parent pair to a tag repository, you will have to supply a reason (like when you petition a tag). A janitor will review this petition, and will approve or deny it. If it is approved, all users who synchronise with that tag repository will gain that parent pair. If it is denied, only you will see it.

Database Migration

the hydrus database

A hydrus client consists of three components:

  1. the software installation

    This is the part that comes with the installer or extract release, with the executable and dlls and a handful of resource folders. It doesn't store any of your settings--it just knows how to present a database as a nice application. If you just run the client executable straight, it looks in its 'db' subdirectory for a database, and if one is not found, it creates a new one. If it sees a database running at a lower version than itself, it will update the database before booting it.

    It doesn't really matter where you put this. An SSD will load it marginally quicker the first time, but you probably won't notice. If you run it without command-line parameters, it will try to write to its own directory (to create the initial database), so if you mean to run it like that, it should not be in a protected place like Program Files.

  2. the actual database

    The client stores all its preferences and current state and knowledge about files--like file size and resolution, tags, ratings, inbox status, and so on and so on--in a handful of SQLite database files, defaulting to install_dir/db. Depending on the size of your client, these might total 1MB in size or be as much as 10GB.

    In order to perform a search or to fetch or process tags, the client has to interact with these files in many small bursts, which means it is best if these files are on a drive with low latency. An SSD is ideal, but a regularly-defragged HDD with a reasonable amount of free space also works well.

  3. your media files

    All of your jpegs and webms and so on (and their thumbnails) are stored in a single complicated directory that is by default at install_dir/db/client_files. All the files are named by their hash and stored in efficient hash-based subdirectories. In general, it is not navigable by humans, but it works very well for the fast access from a giant pool of files the client needs to do to manage your media.

    Thumbnails tend to be fetched dozens at a time, so it is, again, ideal if they are stored on an SSD. Your regular media files--which on many clients total hundreds of GB--are usually fetched one at a time for human consumption and do not benefit from the expensive low-latency of an SSD. They are best stored on a cheap HDD, and, if desired, also work well across a network file system.

these components can be put on different drives

Although an initial install will keep these parts together, it is possible to, say, run the database on a fast drive but keep your media in cheap slow storage. This is an excellent arrangement that works for many users. And if you have a very large collection, you can even spread your files across multiple drives. It is not very technically difficult, but I do not recommend it for new users.

Backing such an arrangement up is obviously more complicated, and the internal client backup is not sophisticated enough to capture everything, so I recommend you figure out a broader solution with a third-party backup program like FreeFileSync.

pulling your media apart

As always, I recommend creating a backup before you try any of this, just in case it goes wrong.

If you would like to move your files and thumbnails to new locations, I generally recommend you not move their folders around yourself--the database has an internal knowledge of where it thinks its file and thumbnail folders are, and if you move them while it is closed, it will become confused and you will have to manually relocate what is missing on the next boot via a repair dialog. This is not impossible to figure out, but if the program's 'client files' folder confuses you at all, I'd recommend you stay away. Instead, you can simply do it through the gui:

Go database->migrate database, giving you this dialog:

This is an image from my old laptop's client. At that time, I had moved the main database and its files out of the install directory but otherwise kept everything together. Your situation may be simpler or more complicated.

To move your files somewhere else, add the new location, empty/remove the old location, and then click 'move files now'.

Portable means that the path is beneath the main db dir and so is stored as a relative path. Portable paths will still function if the database changes location between boots (for instance, if you run the client from a USB drive and it mounts under a different location).

Weight means the relative amount of media you would like to store in that location. It only matters if you are spreading your files across multiple locations. If location A has a weight of 1 and B has a weight of 2, A will get approximately one third of your files and B will get approximately two thirds.
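The weight arithmetic is just a normalisation; a quick sketch:

```python
# Location weights translate into approximate file proportions
# by dividing each weight by the total.
def proportions(weights):
    total = sum(weights.values())
    return {location: weight / total for location, weight in weights.items()}
```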

The operations on this dialog are simple and atomic--at no point is your db ever invalid. Once you have the locations and ideal usage set how you like, hit the 'move files now' button to actually shuffle your files around. It will take some time to finish, but you can pause and resume it later if the job is large or you want to undo or alter something.

If you decide to move your actual database, the program will have to shut down first. Before you boot up again, you will have to create a new program shortcut:

informing the software that the database is not in the default location

A straight call to the client executable will look for a database in install_dir/db. If one is not found, it will create one. So, if you move your database and then try to run the client again, it will try to create a new empty database in the previous location!

So, pass it a -d or --db_dir command line argument, like so:

And it will instead use the given path. If no database is found, it will similarly create a new empty one at that location. You can use any path that is valid in your system, but I would not advise using network locations and so on, as the database relies on some clever device-locking calls that these interfaces may not provide.

Rather than typing the path out in a terminal every time you want to launch your external database, create a new shortcut with the argument in. Something like this, which is from my main development computer and tests that a fresh default install will run an existing database ok:

Note that an install with an 'external' database no longer needs access to write to its own path, so you can store it anywhere you like, including protected read-only locations (e.g. in 'Program Files'). If you do move it, just double-check your shortcuts are still good and you are done.

finally

If your database now lives in one or more new locations, make sure to update your backup routine to follow them!

moving to an SSD

As an example, let's say you started using the hydrus client on your HDD, and now you have an SSD available and would like to move your thumbnails and main install to that SSD to speed up the client. Your database will be valid and functional at every stage of this, and it can all be undone. The basic steps are:

  1. Move your 'fast' files to the fast location.
  2. Move your 'slow' files out of the main install directory.
  3. Move the install and db itself to the fast location and update shortcuts.

Specifically:

You should now have something like this:

p.s. running multiple clients

Since you now know how to tell the software about an external database, you can, if you like, run multiple clients from the same install (and if you previously had multiple install folders, you can now just use the one). Just make multiple shortcuts to the same client executable but with different database directories. They can run at the same time. You'll save yourself a little memory and update-hassle. I do this on my laptop client to run a regular client for my media and a separate 'admin' client to do PTR petitions and so on.

Program Launch Arguments

launch arguments

You can launch the program with several different arguments to alter core behaviour. If you are not familiar with this, you are essentially putting additional text after the launch command that runs the program. You can run this straight from a terminal console (usually good to test with), or you can bundle it into an easy shortcut that you only have to double-click. An example of a launch command with arguments:

C:\Hydrus Network\client.exe -d="E:\hydrus db" --no_db_temp_files

You can also add --help to your program path, like this:

client.py --help
server.exe --help
./server --help

This gives you a full listing of all the arguments below. However, it will not work with the built client executables, which are bundled as non-console programs and will not return text to any console they are launched from. As client.exe is the most commonly run version of the program, here is the list, with some more help about each command:

The server supports the same arguments. It also takes a positional argument of 'start' (start the server, the default), 'stop' (stop any existing server), or 'restart' (do a stop, then a start), which should go before any of the above arguments.

Client API

client api

The hydrus client now supports a very simple API so you can access it with external programs.

By default, the Client API is not turned on. Go to services->manage services and give it a port to get it started. I recommend you not allow non-local connections (i.e. only requests from the same computer will work) to start with.

The Client API should start immediately. It will only be active while the client is open. To test that it is running correctly (and assuming you used the default port of 45869), try loading this:

http://127.0.0.1:45869

You should get a welcome page. By default, the Client API is HTTP, which means it is ok for communication on the same computer or across your home network (e.g. your computer's web browser talking to your computer's hydrus), but not secure for transmission across the internet (e.g. your phone to your home computer). You can turn on HTTPS, but due to technical complexities it will give itself a self-signed 'certificate', so the security is good but imperfect, and whatever is talking to it (e.g. your web browser looking at https://127.0.0.1:45869) may need to add an exception.

The Client API is still experimental and sometimes not user friendly. If you want to talk to your home computer across the internet, you will need some networking experience. You'll need a static IP or reverse proxy service or dynamic domain solution like no-ip.org so your device can locate it, and potentially port-forwarding on your router to expose the port. If you have a way of hosting a domain and have a signed certificate (e.g. from Let's Encrypt), you can overwrite the client.crt and client.key files in your 'db' directory and HTTPS hydrus should host with those.

Once the API is running, go to its entry in services->review services. Each external program trying to access the API will need its own access key, which is the familiar 64-character hexadecimal used in many places in hydrus. You can enter the details manually from the review services panel and then copy/paste the key to your external program, or the program may have the ability to request its own access while a mini-dialog launched from the review services panel waits to catch the request.

Browsers and tools created by hydrus users:

Library modules created by hydrus users:

API

On 200 OK, the API returns JSON for everything except actual file/thumbnail requests. On 4XX and 5XX, assume it will return plain text, sometimes a raw traceback. You'll typically get 400 for a missing parameter, 401/403/419 for missing/insufficient/expired access, and 500 for a real deal serverside error.

Access and permissions

The client gives access to its API through different 'access keys', which are the typical 64-character hex used in many other places across hydrus. Each grants different permissions, such as handling files or tags. Most of the time, a user will provide full access, but do not assume this. If the access header or parameter is not provided, you will get 401, and all insufficient permission problems will return 403 with appropriate error text.

Access is required for every request. You can provide this as an http header, like so:

Or you can include it as a GET or POST parameter on any request (except POST /add_files/add_file, which uses the entire POST body for the file's bytes). Use the same name for your GET or POST argument, such as:
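Both forms can be sketched with Python's standard library. The Hydrus-Client-API-Access-Key name is the API's access header, but the key value below is a placeholder, and nothing is actually sent:

```python
from urllib.request import Request

# Placeholder 64-character hex access key; substitute your own.
ACCESS_KEY = "0123456789abcdef" * 4
BASE = "http://127.0.0.1:45869"

# as an http header:
request = Request(
    BASE + "/verify_access_key",
    headers={"Hydrus-Client-API-Access-Key": ACCESS_KEY},
)

# or as a GET parameter:
url_with_param = BASE + "/verify_access_key?Hydrus-Client-API-Access-Key=" + ACCESS_KEY
```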

There is now a simple 'session' system, where you can get a temporary key that gives the same access without having to include the permanent access key in every request. You can fetch a session key with the /session_key command and thereafter use it just as you would an access key, just with Hydrus-Client-API-Session-Key instead.

Session keys will expire if they are not used within 24 hours, or if the client is restarted, or if the underlying access key is deleted. An invalid/expired session key will give a 419 result with an appropriate error text.

Bear in mind the Client API is still under construction and defaults to HTTP--be careful about transmitting sensitive content outside of localhost. The access key will be unencrypted across any plain HTTP connection, and if it is included as a GET parameter, as simple and convenient as that is, it could be cached in all sorts of places.

Access Management

GET /api_version

Gets the current API version. I will increment this every time I alter the API.

GET /request_new_permissions

Register a new external program with the client. This requires the 'add from api request' mini-dialog under services->review services to be open, otherwise it will 403.

GET /session_key

Get a new session key.

GET /verify_access_key

Check your access key is valid.

Adding Files

POST /add_files/add_file

Tell the client to import a file.

  • Restricted access: YES. Import Files permission needed.

  • Required Headers:

    • Content-Type : application/json (if sending path), application/octet-stream (if sending file)
  • Arguments (in JSON):

path : (the path you want to import)

POST /add_files/delete_files

Tell the client to send files to the trash.

POST /add_files/undelete_files

Tell the client to pull files back out of the trash.

POST /add_files/archive_files

Tell the client to archive inboxed files.

POST /add_files/unarchive_files

Tell the client to re-inbox archived files.

Adding Tags

GET /add_tags/clean_tags

Ask the client about how it will see certain tags.

GET /add_tags/get_tag_services

Ask the client about its tag services.

POST /add_tags/add_tags

Make changes to the tags that files have.

  • Restricted access: YES. Add Tags permission needed.

  • Required Headers: n/a

  • Arguments (in JSON):

    • hash : (an SHA256 hash for a file in 64 characters of hexadecimal)
    • hashes : (a list of SHA256 hashes)
    • service_names_to_tags : (an Object of service names to lists of tags to be 'added' to the files)
    • service_names_to_actions_to_tags : (an Object of service names to content update actions to lists of tags)
    • add_siblings_and_parents : obsolete, now does nothing

You can use either 'hash' or 'hashes', and you can use either the simple add-only 'service_names_to_tags' or the advanced 'service_names_to_actions_to_tags'.

The service names are as in the /add_tags/get_tag_services call.

The permitted 'actions' are:

    • 0 - Add to a local tag service.
    • 1 - Delete from a local tag service.
    • 2 - Pend to a tag repository.
    • 3 - Rescind a pend from a tag repository.
    • 4 - Petition from a tag repository. (This is special)
    • 5 - Rescind a petition from a tag repository.

When you petition a tag from a repository, a 'reason' for the petition is typically needed. If you send a normal list of tags here, a default reason of "Petitioned from API" will be given. If you want to set your own reason, you can instead give a list of [ tag, reason ] pairs.

Some example requests:

Adding some tags to a file:

{
	"hash" : "df2a7b286d21329fc496e3aa8b8a08b67bb1747ca32749acb3f5d544cbfc0f56",
	"service_names_to_tags" : {
		"my tags" : [ "character:supergirl", "rating:safe" ]
	}
}

Adding more tags to two files:

{
	"hashes" : [ "df2a7b286d21329fc496e3aa8b8a08b67bb1747ca32749acb3f5d544cbfc0f56", "f2b022214e711e9a11e2fcec71bfd524f10f0be40c250737a7861a5ddd3faebf" ],
	"service_names_to_tags" : {
		"my tags" : [ "process this" ],
		"public tag repository" : [ "creator:dandon fuga" ]
	}
}

A complicated transaction with all possible actions:

{
	"hash" : "df2a7b286d21329fc496e3aa8b8a08b67bb1747ca32749acb3f5d544cbfc0f56",
	"service_names_to_actions_to_tags" : {
		"my tags" : {
			"0" : [ "character:supergirl", "rating:safe" ],
			"1" : [ "character:superman" ]
		},
		"public tag repository" : {
			"2" : [ "character:supergirl", "rating:safe" ],
			"3" : [ "filename:image.jpg" ],
			"4" : [ [ "creator:danban faga", "typo" ], [ "character:super_girl", "underscore" ] ]
			"5" : [ "skirt" ]
		}
	}
}

This last example is far more complicated than you will usually see. Pend rescinds and petition rescinds are not common. Petitions are also quite rare, and gathering a good petition reason for each tag is often a pain.

Note that the enumerated status keys in the service_names_to_actions_to_tags structure are strings, not ints (JSON does not support int keys for Objects).

Response description: 200 and no content.

Note also that hydrus tag actions are safely idempotent. You can pend a tag that is already pended and not worry about an error--it will be discarded. The same for other reasonable logical scenarios: deleting a tag that does not exist will silently make no change, pending a tag that is already 'current' will again be passed over. It is fine to just throw 'process this' tags at every file import you add and not have to worry about checking which files you already added it to.
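Putting the first example together as an actual request, sketched with Python's standard library (the access key is a placeholder; the request is only constructed here, not sent):

```python
import json
from urllib.request import Request

# The 'adding some tags to a file' body from the examples above.
body = {
    "hash": "df2a7b286d21329fc496e3aa8b8a08b67bb1747ca32749acb3f5d544cbfc0f56",
    "service_names_to_tags": {
        "my tags": ["character:supergirl", "rating:safe"],
    },
}

request = Request(
    "http://127.0.0.1:45869/add_tags/add_tags",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Hydrus-Client-API-Access-Key": "0123456789abcdef" * 4,  # placeholder
        "Content-Type": "application/json",
    },
    method="POST",
)
# a successful call returns 200 and no content
```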

Adding URLs

GET /add_urls/get_url_files

Ask the client about a URL's files.

GET /add_urls/get_url_info

Ask the client for information about a URL.

POST /add_urls/add_url

Tell the client to 'import' a URL. This triggers the exact same routine as drag-and-dropping a text URL onto the main client window.

  • Restricted access: YES. Import URLs permission needed. Add Tags needed to include tags.

  • Required Headers:

    • Content-Type : application/json
  • Arguments (in JSON):

    • url : (the url you want to add)
    • destination_page_key : (optional page identifier for the page to receive the url)
    • destination_page_name : (optional page name to receive the url)
    • show_destination_page : (optional, defaulting to false, controls whether the UI will change pages on add)
    • service_names_to_additional_tags : (optional tags to give to any files imported from this url)
    • filterable_tags : (optional tags to be filtered by any tag import options that applies to the URL)
    • service_names_to_tags : (obsolete, legacy synonym for service_names_to_additional_tags)

If you specify a destination_page_name and an appropriate importer page already exists with that name, that page will be used. Otherwise, a new page with that name will be created (and used by subsequent calls with that name). Make sure that page name is unique (e.g. '/b/ threads', not 'watcher') in your client, or it may not be found.

Alternately, destination_page_key defines exactly which page should be used. Bear in mind this page key is only valid to the current session (they are regenerated on client reset or session reload), so you must figure out which one you want using the /manage_pages/get_pages call. If the correct page_key is not found, or the page it corresponds to is of the incorrect type, the standard page selection/creation rules will apply.

show_destination_page defaults to False to reduce flicker when adding many URLs to different pages quickly. If you turn it on, the client will behave like a URL drag and drop and select the final page the URL ends up on.

service_names_to_additional_tags uses the same data structure as for /add_tags/add_tags. You will need 'add tags' permission, or this will 403. These tags work exactly as 'additional' tags work in a tag import options. They are service specific, and always added unless some advanced tag import options checkbox (like 'only add tags to new files') is set.

filterable_tags works like the tags parsed by a hydrus downloader. It is just a list of strings. They have no inherent service and will be sent to a tag import options, if one exists, to decide which tag services get what. This parameter is useful if you are parsing all of a URL's tags outside of hydrus and want to have them processed like any other downloader, rather than figuring out service names and namespace filtering on your end. Note that in order for a tag import options to kick in, I think you will have to have a Post URL URL Class hydrus-side set up for the URL so some tag import options (whether that is Class-specific or just the default) can be loaded at import time.
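A hedged sketch of making this call with the standard library, assuming the Client API's default address of 127.0.0.1:45869; the access key, tags, and page name here are placeholders:

```python
import json
import urllib.request

API_URL = 'http://127.0.0.1:45869'  # assumed default Client API address
ACCESS_KEY = '0150d9c4f6a6d2082534a997f4588dcf0c56dffe1d03ffbf98472236112236ae'  # placeholder

def make_add_url_request(url, page_name=None, tags=None, filterable_tags=None):
    # Build the JSON body, only including the optional arguments that
    # were actually supplied.
    body = {'url': url}
    if page_name is not None:
        body['destination_page_name'] = page_name
    if tags is not None:
        body['service_names_to_additional_tags'] = tags
    if filterable_tags is not None:
        body['filterable_tags'] = filterable_tags
    return urllib.request.Request(
        API_URL + '/add_urls/add_url',
        data=json.dumps(body).encode('utf-8'),
        headers={
            'Content-Type': 'application/json',
            'Hydrus-Client-API-Access-Key': ACCESS_KEY,
        },
    )
```

Pass the returned Request to urllib.request.urlopen when your client is running.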

POST /add_urls/associate_url

Manage which URLs the client considers to be associated with which files.

  • Restricted access: YES. Import URLs permission needed.

  • Required Headers:

    • Content-Type : application/json
  • Arguments (in JSON):

    • url_to_add : (a URL you want to associate with the file(s))
    • urls_to_add : (a list of URLs you want to associate with the file(s))
    • url_to_delete : (a URL you want to disassociate from the file(s))
    • urls_to_delete : (a list of URLs you want to disassociate from the file(s))
    • hash : (a SHA256 hash for a file in 64 characters of hexadecimal)
    • hashes : (a list of SHA256 hashes)

All of these are optional, but you obviously need at least one of the 'url' arguments and one of the 'hash' arguments. The single/multiple arguments work the same--just use whatever is convenient for you. Unless you really know what you are doing with URL Classes, I strongly recommend you stick to associating URLs with just one single 'hash' at a time. Multiple hashes pointing to the same URL is unusual and frequently unhelpful.
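For illustration, a single-hash associate_url body (the hash and URLs here are placeholders) might be built like so:

```python
import json

# Sketch: building an associate_url body for one file, per the advice
# above to stick to a single hash at a time. Only the list arguments
# that were supplied end up in the body.
def make_associate_body(hash_hex, urls_to_add=(), urls_to_delete=()):
    body = {'hash': hash_hex}
    if urls_to_add:
        body['urls_to_add'] = list(urls_to_add)
    if urls_to_delete:
        body['urls_to_delete'] = list(urls_to_delete)
    return json.dumps(body)
```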

Managing Cookies

This refers to the cookies held in the client's session manager, which are sent with network requests to different domains.

GET /manage_cookies/get_cookies

Get the cookies for a particular domain.

  • Restricted access: YES. Manage Cookies permission needed.

  • Required Headers: n/a

  • Arguments: domain

  • Example request (for gelbooru.com):

    • /manage_cookies/get_cookies?domain=gelbooru.com

Response description: A JSON Object listing all the cookies for that domain in [ name, value, domain, path, expires ] format.

  • Example response:

    • {
      	"cookies" : [
      		[ "__cfduid", "f1bef65041e54e93110a883360bc7e71", ".gelbooru.com", "/", 1596223327 ],
      		[ "pass_hash", "0b0833b797f108e340b315bc5463c324", "gelbooru.com", "/", 1585855361 ],
      		[ "user_id", "123456", "gelbooru.com", "/", 1585855361 ]
      	]
      }

Note that these variables are all strings except 'expires', which is either an integer timestamp or null for session cookies.

This request will also return any cookies for subdomains. The session system in hydrus generally stores cookies according to the second-level domain, so if you request cookies for specific.someoverbooru.net, you will still get the cookies for someoverbooru.net and all its subdomains.

POST /manage_cookies/set_cookies

Set some new cookies for the client. This makes it easier to 'copy' a login from a web browser or similar to hydrus if hydrus's login system can't handle the site yet.

  • Restricted access: YES. Manage Cookies permission needed.

  • Required Headers:

    • Content-Type : application/json
  • Arguments (in JSON):

    • cookies : (a list of cookie rows in the same format as the GET request above)
  • Example request body:

    • {
      	"cookies" : [
      		[ "PHPSESSID", "07669eb2a1a6e840e498bb6e0799f3fb", ".somesite.com", "/", 1627327719 ],
      		[ "tag_filter", "1", ".somesite.com", "/", 1627327719 ]
      	]
      }

You can set 'value' to be null, which will clear any existing cookie with the corresponding name, domain, and path (acting essentially as a delete).

Expires can be null, but session cookies will time out in hydrus after 60 minutes of non-use.
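A sketch of building a set_cookies body from name/value pairs copied out of a browser; the domain and cookie names are illustrative, and the expiry is set a year out:

```python
import json
import time

# Sketch: turning browser-copied (name, value) pairs into the cookie row
# format above: [ name, value, domain, path, expires ].
def make_set_cookies_body(domain, cookie_pairs, path='/', lifetime_seconds=365 * 86400):
    expires = int(time.time()) + lifetime_seconds
    rows = [[name, value, domain, path, expires] for name, value in cookie_pairs]
    return json.dumps({'cookies': rows})

body = make_set_cookies_body(
    '.somesite.com',
    [('PHPSESSID', '07669eb2a1a6e840e498bb6e0799f3fb'), ('tag_filter', '1')],
)
```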

Managing Pages

This refers to the pages of the main client UI.

GET /manage_pages/get_pages

Get the page structure of the current UI session.

  • Restricted access: YES. Manage Pages permission needed.

  • Required Headers: n/a

  • Arguments: n/a

Response description: A JSON Object of the top-level page 'notebook' (page of pages) detailing its basic information and current sub-pages. Pages of pages beneath it will list their own sub-pages in the same way.

  • Example response:

    • {
      	"pages" : {
      		"name" : "top pages notebook",
      		"page_key" : "3b28d8a59ec61834325eb6275d9df012860a1ecfd9e1246423059bc47fb6d5bd",
      		"page_type" : 10,
      		"selected" : true,
      		"pages" : [
      			{
      				"name" : "files",
      				"page_key" : "d436ff5109215199913705eb9a7669d8a6b67c52e41c3b42904db083255ca84d",
      				"page_type" : 6,
      				"selected" : false
      			},
      			{
      				"name" : "thread watcher",
      				"page_key" : "40887fa327edca01e1d69b533dddba4681b2c43e0b4ebee0576177852e8c32e7",
      				"page_type" : 9,
      				"selected" : false
      			},
      			{
      				"name" : "pages",
      				"page_key" : "2ee7fa4058e1e23f2bd9e915cdf9347ae90902a8622d6559ba019a83a785c4dc",
      				"page_type" : 10,
      				"selected" : true,
      				"pages" : [
      					{
      						"name" : "urls",
      						"page_key" : "9fe22cb760d9ee6de32575ed9f27b76b4c215179cf843d3f9044efeeca98411f",
      						"page_type" : 7,
      						"selected" : true
      					},
      					{
      						"name" : "files",
      						"page_key" : "2977d57fc9c588be783727bcd54225d577b44e8aa2f91e365a3eb3c3f580dc4e",
      						"page_type" : 6,
      						"selected" : false
      					}
      				]
      			}	
      		]
      	}
      }

The page types are as follows:

The top page of pages will always be there, and always selected. 'selected' means which page is currently in view; the selection propagates down through nested pages of pages until it terminates. It may terminate in an empty page of pages, so do not assume it will end on a 'media' page.

The 'page_key' is a unique identifier for the page. It will stay the same for a particular page throughout the session, but new ones are generated on a client restart or other session reload.
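For example, a small helper (illustrative, not part of any client library) can resolve a page name to its session page_key from this structure:

```python
# Sketch: walking the nested /manage_pages/get_pages structure
# depth-first, returning the page_key of the first page whose name
# matches, or None if no page has that name.
def find_page_key(page, name):
    if page.get('name') == name:
        return page['page_key']
    for sub_page in page.get('pages', []):
        key = find_page_key(sub_page, name)
        if key is not None:
            return key
    return None
```

Remember that page keys only last for the current session, so re-resolve them after a client restart.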

GET /manage_pages/get_page_info

Get information about a specific page.

This is under construction. The current call dumps a ton of info for different downloader pages. Please experiment in IRL situations and give feedback for now! I will flesh out this help with more enumeration info and examples as this gets nailed down. POST commands to alter pages (adding, removing, highlighting) will come later.

  • Restricted access: YES. Manage Pages permission needed.

  • Required Headers: n/a

  • Arguments:

    • page_key : (hexadecimal page_key as stated in /manage_pages/get_pages)
    • simple : true or false (optional, defaulting to true)
  • Example request:

    • /manage_pages/get_page_info?page_key=aebbf4b594e6986bddf1eeb0b5846a1e6bc4e07088e517aff166f1aeb1c3c9da&simple=true

Response description: A JSON Object of the page's information. At present, this mostly means downloader information.

POST /manage_pages/focus_page

'Show' a page in the main GUI, making it the current page in view. If it is already the current page, no change is made.

  • Restricted access: YES. Manage Pages permission needed.

  • Required Headers:

    • Content-Type : application/json
  • Arguments (in JSON):

    • page_key : (the page key for the page you wish to show)

The page key is the same as fetched in the /manage_pages/get_pages call.

Searching Files

File search in hydrus is not paginated like a booru--all searches return all results in one go. In order to keep this fast, search is split into two steps--fetching file identifiers with a search, and then fetching file metadata in batches. You may have noticed that the client itself performs searches like this--thinking a bit about a search and then bundling results in batches of 256 files before eventually throwing all the thumbnails on screen.

GET /get_files/search_files

Search for the client's files.

  • Restricted access: YES. Search for Files permission needed. Additional search permission limits may apply.

  • Required Headers: n/a

  • Arguments (in percent-encoded JSON):

    • tags : (a list of tags you wish to search for)
    • system_inbox : true or false (optional, defaulting to false)
    • system_archive : true or false (optional, defaulting to false)
  • Example request for all files in the inbox with tags "blue eyes", "blonde hair", and "кино":

    • /get_files/search_files?system_inbox=true&tags=%5B%22blue%20eyes%22%2C%20%22blonde%20hair%22%2C%20%22%5Cu043a%5Cu0438%5Cu043d%5Cu043e%22%5D

If the access key's permissions only permit search for certain tags, at least one whitelisted/non-blacklisted tag must be in the "tags" list or this will 403. Tags can be prepended with a hyphen to make a negated tag (e.g. "-green eyes"), but these will not be eligible for the permissions whitelist check.
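The percent-encoding above can be reproduced with Python's standard library; this sketch rebuilds the exact example request (json.dumps escapes the non-ASCII tag to \uXXXX form, then quote() percent-encodes the JSON):

```python
import json
import urllib.parse

# Sketch: building a search_files request path. The tags list is dumped
# to JSON and percent-encoded with quote() (which encodes spaces as %20,
# unlike the default quote_plus).
def make_search_query(tags, system_inbox=False):
    params = {}
    if system_inbox:
        params['system_inbox'] = 'true'
    params['tags'] = json.dumps(tags)
    return '/get_files/search_files?' + urllib.parse.urlencode(
        params, quote_via=urllib.parse.quote
    )

query = make_search_query(['blue eyes', 'blonde hair', 'кино'], system_inbox=True)
```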

Response description: The full list of numerical file ids that match the search.

  • Example response:

    • {
      	"file_ids" : [ 125462, 4852415, 123, 591415 ]
      }

File ids are internal and specific to an individual client. For a client, a file with hash H always has the same file id N, but two clients will have different ideas about which N goes with which H. They are a bit faster than hashes to retrieve and search with en masse, which is why they are exposed here.

The search will be performed on the 'local files' file domain and 'all known tags' tag domain. At current, they will be sorted in import time order, newest to oldest (if you would like to paginate them before fetching metadata), but sort options will expand in future.

Note that most clients will have an invisible system:limit of 10,000 files on all queries. I expect to add more system predicates to help searching for untagged files, but it is tricky to fetch all files under any circumstance. Large queries may take several seconds to respond.

GET /get_files/file_metadata

Get metadata about files in the client.

  • Restricted access: YES. Search for Files permission needed. Additional search permission limits may apply.

  • Required Headers: n/a

  • Arguments (in percent-encoded JSON):

    • file_ids : (a list of numerical file ids)
    • hashes : (a list of hexadecimal SHA256 hashes)
    • only_return_identifiers : true or false (optional, defaulting to false)
    • detailed_url_information : true or false (optional, defaulting to false)

You need one of file_ids or hashes. If your access key is restricted by tag, you cannot search by hashes, and the file_ids you search for must have been in the most recent search result.

  • Example request for two files with ids 123 and 4567:

    • /get_files/file_metadata?file_ids=%5B123%2C%204567%5D

  • The same, but only wants hashes back:

    • /get_files/file_metadata?file_ids=%5B123%2C%204567%5D&only_return_identifiers=true

  • And one that fetches two hashes, 4c77267f93415de0bc33b7725b8c331a809a924084bee03ab2f5fae1c6019eb2 and 3e7cb9044fe81bda0d7a84b5cb781cba4e255e4871cba6ae8ecd8207850d5b82:

    • /get_files/file_metadata?hashes=%5B%224c77267f93415de0bc33b7725b8c331a809a924084bee03ab2f5fae1c6019eb2%22%2C%20%223e7cb9044fe81bda0d7a84b5cb781cba4e255e4871cba6ae8ecd8207850d5b82%22%5D

This request string can obviously get pretty ridiculously long. It also takes a bit of time to fetch metadata from the database. In its normal searches, the client usually fetches file metadata in batches of 256.
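A sketch of batching ids client-style, 256 per request; each yielded path matches the request format above:

```python
import json
import urllib.parse

# Sketch: splitting a big id list into 256-file batches, as the client
# itself does, yielding one file_metadata request path per batch.
def metadata_requests(file_ids, batch_size=256):
    for i in range(0, len(file_ids), batch_size):
        batch = file_ids[i:i + batch_size]
        yield '/get_files/file_metadata?file_ids=' + urllib.parse.quote(json.dumps(batch))
```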

Response description: A list of JSON Objects that store a variety of file metadata.

  • Example response:

    • {
      	"metadata" : [
      		{
      			"file_id" : 123,
      			"hash" : "4c77267f93415de0bc33b7725b8c331a809a924084bee03ab2f5fae1c6019eb2",
      			"size" : 63405,
      			"mime" : "image/jpg",
      			"ext" : ".jpg",
      			"width" : 640,
      			"height" : 480,
      			"duration" : null,
      			"has_audio" : false,
      			"num_frames" : null,
      			"num_words" : null,
      			"is_inbox" : true,
      			"is_local" : true,
      			"is_trashed" : false,
      			"known_urls" : [],
      			"service_names_to_statuses_to_tags" : {}
      			"service_names_to_statuses_to_display_tags" : {}
      		},
      		{
      			"file_id" : 4567,
      			"hash" : "3e7cb9044fe81bda0d7a84b5cb781cba4e255e4871cba6ae8ecd8207850d5b82",
      			"size" : 199713,
      			"mime" : "video/webm",
      			"ext" : ".webm",
      			"width" : 1920,
      			"height" : 1080,
      			"duration" : 4040,
      			"has_audio" : true,
      			"num_frames" : 102,
      			"num_words" : null,
      			"is_inbox" : false,
      			"is_local" : true,
      			"is_trashed" : false,
      			"known_urls" : [
      				"https://gelbooru.com/index.php?page=post&s=view&id=4841557",
      				"https://img2.gelbooru.com//images/80/c8/80c8646b4a49395fb36c805f316c49a9.jpg",
      				"http://origin-orig.deviantart.net/ed31/f/2019/210/7/8/beachqueen_samus_by_dandonfuga-ddcu1xg.jpg"
      			],
      			"service_names_to_statuses_to_tags" : {
      				"my tags" : {
      					"0" : [ "favourites" ]
      					"2" : [ "process this later" ]
      				},
      				"my tag repository" : {
      					"0" : [ "blonde_hair", "blue_eyes", "looking_at_viewer" ]
      					"1" : [ "bodysuit" ]
      				}
      			},
      			"service_names_to_statuses_to_display_tags" : {
      				"my tags" : {
      					"0" : [ "favourites" ]
      					"2" : [ "process this later", "processing" ]
      				},
      				"my tag repository" : {
      					"0" : [ "blonde hair", "blue eyes", "looking at viewer" ]
      					"1" : [ "bodysuit", "clothing" ]
      				}
      			}
      		}
      	]
      }

  • And one where only_return_identifiers is true:

    • {
      	"metadata" : [
      		{
      			"file_id" : 123,
      			"hash" : "4c77267f93415de0bc33b7725b8c331a809a924084bee03ab2f5fae1c6019eb2"
      		},
      		{
      			"file_id" : 4567,
      			"hash" : "3e7cb9044fe81bda0d7a84b5cb781cba4e255e4871cba6ae8ecd8207850d5b82"
      		}
      	]
      }

Size is in bytes. Duration is in milliseconds, and may be an int or a float.

The service_names_to_statuses_to_tags structures are similar to the /add_tags/add_tags scheme, except that the status numbers are:

    • 0 - current
    • 1 - pending
    • 2 - deleted
    • 3 - petitioned

Note that since JSON Object keys must be strings, these status numbers are strings, not ints.

While service_names_to_statuses_to_tags represents the actual tags stored on the database for a file, the service_names_to_statuses_to_display_tags structure reflects how tags appear in the UI, after siblings are collapsed and parents are added. If you want to edit a file's tags, use service_names_to_statuses_to_tags. If you want to render to the user, use service_names_to_statuses_to_display_tags.
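For illustration, a small helper to pull one service's tags of a given status out of a metadata Object (the service names below match the example response; this is not client code):

```python
# Sketch: fetching tags of a given status ('0' = current by default)
# for one service from a file's metadata Object. Use display=True only
# when rendering to a user; edit with the raw structure.
def get_tags(file_metadata, service_name, status='0', display=False):
    key = ('service_names_to_statuses_to_display_tags' if display
           else 'service_names_to_statuses_to_tags')
    return file_metadata.get(key, {}).get(service_name, {}).get(status, [])
```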

If you add detailed_url_information=true, a new entry, 'detailed_known_urls', will be added for each file, with a list of the same structure as /add_urls/get_url_info. This may be an expensive request if you are querying thousands of files at once.

For example:

GET /get_files/file

Get a file.

  • Restricted access: YES. Search for Files permission needed. Additional search permission limits may apply.

  • Required Headers: n/a

  • Arguments :

    • file_id : (numerical file id for the file)
    • hash : (a hexadecimal SHA256 hash for the file)

Only use one. As with metadata fetching, you may only use the hash argument if you have access to all files. If you are tag-restricted, you will have to use a file_id in the last search you ran.

GET /get_files/thumbnail

Get a file's thumbnail.

  • Restricted access: YES. Search for Files permission needed. Additional search permission limits may apply.

  • Required Headers: n/a

  • Arguments :

    • file_id : (numerical file id for the file)
    • hash : (a hexadecimal SHA256 hash for the file)

Only use one. As with metadata fetching, you may only use the hash argument if you have access to all files. If you are tag-restricted, you will have to use a file_id in the last search you ran.

IPFS

ipfs

IPFS is a p2p protocol that makes it easy to share many sorts of data. The hydrus client can communicate with an IPFS daemon to send and receive files.

You can read more about IPFS from their homepage, or this guide that explains its various rules in more detail.

For our purposes, we only need to know about these concepts:

getting ipfs

Get the prebuilt executable here. Inside should be a very simple 'ipfs' executable that does everything. Extract it somewhere and open up a terminal in the same folder, and then type:

The IPFS exe should now be running in that terminal, ready to respond to requests:

You can kill it with Ctrl+C and restart it with the 'ipfs daemon' call again (you only have to run 'ipfs init' once).

When it is running, opening this page should download and display an example 'Hello World!' file from ~~~across the internet~~~.

Your daemon listens for other instances of ipfs using port 4001, so if you know how to open that port in your firewall and router, make sure you do.

connecting your client

IPFS daemons are treated as services inside hydrus, so go to services->manage services->remote->ipfs daemons and add in your information. Hydrus uses the API port, default 5001, so you will probably want to use credentials of '127.0.0.1:5001'. You can click 'test credentials' to make sure everything is working.

Thereafter, you will get the option to 'pin' and 'unpin' from a thumbnail's right-click menu, like so:

This works like hydrus's repository uploads--it won't happen immediately, but instead will be queued up at the pending menu. Commit all your pins when you are ready:

Notice how the IPFS icon appears on your pending and pinned files. You can search for these files using 'system:file service'.

Unpin works the same as pin, just like a hydrus repository petition.

Right-clicking any pinned file will give you a new 'share' action:

Which will put it straight in your clipboard. In this case, it is QmP6BNvWfkNf74bY3q1ohtDZ9gAmss4LAjuFhqpDPQNm1S.

If you want to share a pinned file with someone, you have to tell them this multihash. They can then:

directories

If you have many files to share, IPFS also supports directories, and now hydrus does as well. IPFS directories use the same sorts of multihash as files, and you can download them into the hydrus client using the same pages->new download popup->an ipfs multihash menu entry. The client will detect the multihash represents a directory and give you a simple selection dialog:

You may recognise those hash filenames--this example was created by hydrus, which can create ipfs directories from any selection of files from the same right-click menu:

Hydrus will pin all the files and then wrap them in a directory, showing its progress in a popup. Your current directory shares are summarised on the respective services->review services panel:

If you find you use IPFS a lot, here are some add-ons for your web browser, as recommended by /tech/:

This script changes all bare ipfs hashes into clickable links to the ipfs gateway (on page loads):

https://greasyfork.org/en/scripts/14837-ipfs-hash-linker

These redirect all gateway links to your local daemon when it is running, and they work well with the previous script:

https://github.com/lidel/ipfs-firefox-addon

https://github.com/dylanPowers/ipfs-chrome-extension

The Local Booru

This was a fun project, but it never advanced beyond a prototype. The future of this system is other people's nice applications plugging into the Client API.

local booru

The hydrus client has a simple booru to help you share your files with others over the internet.

First of all, this is hosted from your client, which means other people will be connecting to your computer and fetching files you choose to share from your hard drive. If you close your client or shut your computer down, the local booru will no longer work.

how to do it

First of all, turn the local booru server on by going to services->manage services and giving it a port:

It doesn't matter what you pick, but make it something fairly high. When you ok that dialog, the client should start the booru. You may get a firewall warning.

Then right click some files you want to share and select share->local booru. This will throw up a small dialog, like so:

This lets you enter an optional name, which titles the share and helps you keep track of it, an optional text, which lets you say some words or html to the people you are sharing with, and an expiry, which lets you determine if and when the share will no longer work.

You can also copy either the internal or external link to your clipboard. The internal link (usually starting something like http://127.0.0.1:45866/) works inside your network and is great just for testing, while the external link (starting http://[your external ip address]:[external port]/) will work for anyone around the world, as long as your booru's port is being forwarded correctly.

If you use a dynamic-ip service like No-IP, you can replace your external IP with your redirect hostname. You have to do it by hand right now, but I'll add a way to do it automatically in future.

Note that anyone with the external link will be able to see your share, so make sure you only share links with people you trust.

forwarding your port

Your home router acts as a barrier between the computers inside the network and the internet. Those inside can see out, but outsiders can only see what you tell the router to permit. Since you want to let people connect to your computer, you need to tell the router to forward all requests of a certain kind to your computer, and thus your client.

If you have never done this before, it can be a headache, especially doing it manually. Luckily, a technology called UPnP makes it a ton easier, and this is how your Skype or Bittorrent clients do it automatically. Not all routers support it, but most do. You can have hydrus try to open a port this way back on services->manage services. Unless you know what you are doing and have a good reason to make them different, you might as well keep the internal and external ports the same.

Once you have it set up, the client will try to make sure your router keeps that port open for your client. If it all works, you should see the new mapping appear in your services->manage local upnp dialog, which lists all your router's current port mappings.

If you want to test that the port forward is set up correctly, going to http://[external ip]:[external port]/ should give a little html just saying hello. Your ISP might not allow you to talk to yourself, though, so ask a friend to try if you are having trouble.

If you still do not understand what is going on here, this is a good article explaining everything.

If you do not like UPnP or your router does not support it, you can set the port forward up manually, but I encourage you to keep the internal and external port the same, because absent a 'upnp port' option, the 'copy external share link' button will use the internal port.

so, what do you get?

The html layout is very simple:



It uses a very similar stylesheet to these help pages. If you would like to change the style, have a look at the html and then edit install_dir/static/local_booru_style.css. The thumbnails will be the same size as in your client.

editing an existing share

You can review all your shares on services->review services, under local->booru. You can copy the links again, change the title/text/expiration, and delete any shares you don't want any more.


Setting up your own Server

You do not need the server to do anything with hydrus! It is only for advanced users to do very specific jobs! The server is also hacked-together and quite technical. It requires a fair amount of experience with the client and its concepts, and it does not operate on a timescale that works well on a LAN. Only try running your own server once you have a bit of experience synchronising with something like the PTR and you think, 'Hey, I know exactly what that does, and I would like one!'

Here is a document put together by a user describing whether you want the server.

setting up a server

I will use two terms, server and service, to mean two distinct things:

Setting up a hydrus server is easy compared to, say, Apache. There are no .conf files to mess about with, and everything is controlled through the client. When started, the server will place an icon in your system tray in Windows or open a small frame in Linux or macOS. To close the server, either right-click the system tray icon and select exit, or just close the frame.

The basic process for setting up a server is:

Let's look at these steps in more detail:

start the server

Since the server and client have so much common code, I package them together. If you have the client, you have the server. If you installed in Windows, you can hit the shortcut in your start menu. Otherwise, go straight to 'server' or 'server.exe' or 'server.pyw' in your installation directory. The program will first try to take port 45870 for its administration interface, so make sure that is free. Open your firewall as appropriate.


set up the client

In the services->manage services dialog, add a new 'hydrus server administration service' and set up the basic options as appropriate. If you are running the server on the same computer as the client, its hostname is 'localhost'.

In order to set up the first admin account and an access key, use 'init' as a registration key. This special registration key will only work to initialise this first super-account.

YOU'LL WANT TO SAVE YOUR ACCESS KEY IN A SAFE PLACE

If you lose your admin access key, there is no way to get it back, and if you are not sqlite-proficient, you'll have to restart from the beginning by deleting your server's database files.

If the client can't connect to the server, it is either not running or you have a firewall/port-mapping problem. If you want a quick way to test the server's visibility, just put https://host:port into your browser (make sure it is https! http will not work)--if it is working, your browser will probably complain about its self-signed https certificate. Once you add a certificate exception, the server should return some simple html identifying itself.
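If you prefer to script that visibility check, here is a hedged sketch; the host and port are illustrative, and certificate verification is disabled for this one check only because the server's certificate is self-signed:

```python
import ssl
import urllib.request

# Sketch: a context that skips certificate verification, matching the
# browser's certificate-exception step above. Do not reuse this context
# for anything other than talking to your own self-signed server.
def make_insecure_context():
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    return context

def server_is_visible(host, port, timeout=5):
    # Returns True if the server answers with its simple identifying
    # html, False on connection failure (not running, firewalled, etc.).
    try:
        with urllib.request.urlopen(
            f'https://{host}:{port}/', context=make_insecure_context(), timeout=timeout
        ) as response:
            return response.status == 200
    except OSError:
        return False
```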

set up the server

You should have a new submenu, 'administrate services', under 'services', in the client gui. This is where you control most server and service-wide stuff.

admin->your server->manage services lets you add, edit, and delete the services your server runs. Every time you add one, you will also be added as that service's first administrator, and the admin menu will gain a new entry for it.

making accounts

Go admin->your service->create new accounts to create new registration keys. Send the registration keys to the users you want to give these new accounts. A registration key will only work once, so if you want to give several people the same account, they will have to share the access key amongst themselves once one of them has registered the account. (Or you can register the account yourself and send them all the same access key. Do what you like!)

Go admin->manage account types to add, remove, or edit account types. Make sure everyone has at least downloader (get_data) permissions so they can stay synchronised.

You can create as many accounts of whatever kind you like. Depending on your usage scenario, you may want to have all uploaders, one uploader and many downloaders, or just a single administrator. There are many combinations.

???

The most important part is to have fun! There are no losers on the INFORMATION SUPERHIGHWAY.

profit

I honestly hope you can get some benefit out of my code, whether just as a backup or as part of a far more complex system. Please mail me your comments as I am always keen to make improvements.

btw, how to backup a repo's db

All of a server's files and options are stored in its accompanying .db file and respective subdirectories, which are created on first startup (just like with the client). To backup or restore, you have two options:

OMG EVERYTHING WENT WRONG

If you get to a point where you can no longer boot the repository, try running SQLite Studio and opening server.db. If the issue is simple--like manually changing the port number--you may be in luck. Send me an email if it is tricky.

Remember that everything is breaking all the time. Make regular backups, and you'll minimise your problems.

running a client or server in wine

getting it to work on wine

Several Linux and macOS users have found success running hydrus with Wine. Here is a post from a Linux dude:

Some things I picked up on after extended use:

Installation process:

If you get the client running in Wine, please let me know how you get on!

running a client or server from source

running from source

I write the client and server entirely in python, which can run straight from source. It is not simple to get hydrus running this way, but if none of the built packages work for you (for instance you use a non-Ubuntu-compatible flavour of Linux), it may be the only way you can get the program to run. Also, if you have a general interest in exploring the code or wish to otherwise modify the program, you will obviously need to do this stuff.

a quick note about Linux flavours

I often point people here when they are running non-Ubuntu flavours of Linux and cannot run my build. One Debian user hit a libX11 error on boot, but found that by simply deleting the libX11.so.6 file in the hydrus install directory, he was able to boot. I presume this meant my hydrus build was then relying on his local libX11.so, which happened to have better API compatibility. If you receive a similar error, you might like to try the same sort of thing. Let me know if you discover anything!

building on windows

Installing some packages on Windows with pip may need Visual Studio's C++ Build Tools for your version of python. Although these tools are free, it can be a pain to get them through Microsoft's official (and often huge) installer. Instead, install Chocolatey and use this one simple line:

choco install -y vcbuildtools visualstudio2017buildtools

Trust me, this will save a ton of headaches!

what you will need

You will need basic python experience, python 3.x, and a number of python modules, most of which you can get through pip.

If you are on Linux or macOS, or if you are on Windows and have an existing python you do not want to stomp all over with new modules, I recommend you create a virtual environment:

Note, if you are on Linux, it may be easier to use your package manager instead of messing around with venv. A user has written a great summary with all needed packages here.

If you do want to create a new venv environment:
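
Something like this, using python's standard venv module, run from the directory you want the environment to live in:

```shell
# create a fresh virtual environment in a 'venv' subdirectory
python3 -m venv venv

# turn it on (Linux/macOS) -- repeat this in every new terminal session
. venv/bin/activate

# 'python' and 'pip' now point inside the venv
python --version
```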

That '. venv/bin/activate' line turns your venv on, and will be needed every time you run the client.pyw/server.py files. You can easily tuck it into a launch script.

On Windows, the activate script is at venv\Scripts\activate, and the whole deal is much easier in cmd than PowerShell. If you get PowerShell by default, just type 'cmd' to get an old-fashioned command line. In cmd, the launch command is just 'venv\Scripts\activate', with no leading period.

After that, you can go nuts with pip. I think this will do for most systems:

You may want to do all that in smaller batches.
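
The exact module list changes between releases, so treat the names below as a typical snapshot rather than gospel; batching like this also makes it obvious which package a build failure came from:

```shell
# networking and parsing
python3 -m pip install requests beautifulsoup4 html5lib lxml chardet

# images and maths
python3 -m pip install numpy Pillow opencv-python-headless

# compression, system info, server bits
python3 -m pip install lz4 psutil PyYAML Send2Trash service_identity Twisted
```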

You will also need Qt5. Either PySide2 (the default) or PyQt5 is supported, through qtpy. You can install either, again, with pip:
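
That is, one of these two lines (adding PyQtChart alongside PyQt5 is my assumption, since hydrus draws some charts through Qt; drop it if it gives you trouble):

```shell
# the default binding
python3 -m pip install PySide2

# -or- the PyQt5 alternative
python3 -m pip install PyQt5 PyQtChart
```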

Qt 5.15 currently seems to be working well, but 5.14 caused some trouble.

And optionally, you can add these packages:

Here is a masterline with everything for general use:

For Windows, depending on which compiler you are using, pip can have problems building some modules like lz4 and lxml. This page has a lot of prebuilt binaries--I have found it very helpful many times. You may want to update python's sqlite3.dll as well--you can get it here, and just drop it in C:\Python37\DLLs or wherever you have python installed. I have a fair bit of experience with Windows python, so send me a mail if you need help.

If you don't have ffmpeg in your PATH and you want to import videos, you will need to put a static FFMPEG executable in the install_dir/bin directory. Have a look at how I do it in the extractable compiled releases if you can't figure it out. On Windows, you can copy the exe from one of those releases, or just download the latest static build right from the FFMPEG site.
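
On Linux, for example, that looks something like this (the paths are hypothetical; the only real requirement is that an executable called ffmpeg ends up in install_dir/bin):

```shell
# drop a static ffmpeg build into the install's bin directory
cp ~/Downloads/ffmpeg-release/ffmpeg /opt/hydrus/bin/ffmpeg
chmod +x /opt/hydrus/bin/ffmpeg
```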

Once you have everything set up, client.pyw and server.py should look for and run off client.db and server.db just like the executables. They will look in the 'db' directory by default, or anywhere you point them with the "-d" parameter, again just like the executables.
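
So, with the venv active and from the install directory, launching looks like this (the second path is whatever you want it to be):

```shell
# default: uses install_dir/db
python3 client.pyw

# -or- point it at another database directory
python3 client.pyw -d /path/to/some/other/db
```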

I develop hydrus on and am most experienced with Windows, so the program is more stable and reasonable there. I do not have as much experience with Linux or macOS, so I would particularly appreciate your Linux/macOS bug reports and any informed suggestions.

my code

Unlike most software people, I am more INFJ than INTP/J. My coding style is unusual and unprofessional, and everything is pretty much hacked together. Please look through the source if you are interested in how things work and ask me if you don't understand something. I'm constantly throwing new code together and then cleaning and overhauling it down the line.

I work strictly alone, so while I am very interested in detailed bug reports or suggestions for good libraries to use, I am not looking for pull requests. Everything I do is WTFPL, so feel free to fork and play around with things on your end as much as you like.