Do you need help? That's okay, everyone needs help sometimes. You can get real time help via chat on Discord, or post on our subreddit.
But before you go there and post, be sure your request for help is the best it can be. Clearly describe the problem and briefly describe your setup, including things like your OS/distribution, version of .NET, version of Readarr, and your download client and its version. If you are using Docker, please run through the Docker Guide first, as that will solve common and frequent path/permissions issues. Otherwise, please have a docker compose handy (see How to Generate a Docker Compose, and the sketch below). Tell us what you've tried already and what you've looked at. Use the Logging and Log Files section to turn your logging up to trace, recreate the issue, pastebin the relevant context, and include a link to it in your post. Maybe even include some screenshots to highlight the issue.
The more we know, the easier it is to help you.
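One common way to generate a compose file from a running container is the docker-autocompose project; a minimal sketch, assuming your container is named readarr (the image path is per that project's README):

```shell
# prints a docker-compose.yml for the named container to stdout
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/red5d/docker-autocompose readarr
```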
It is likely beneficial to also review the Common Troubleshooting problems below.
If you're asked for debug logs, your logs will contain `debug`, and if you're asked for trace logs, your logs will contain `trace`. If the logs you are providing do not contain either, then they are not the logs requested.
To provide good and useful logs for sharing:
Ensure a spammy task is NOT running, such as an RSS refresh
To find the relevant context in a large log file, you can search it with grep. For example, to find mentions of 'Shooter' with 100 lines of context:

```shell
grep -inr -C 100 -e 'Shooter' /path/to/logs/*.trace*.txt
```

If your Appdata Directory is in your home folder then you'd run:

```shell
grep -inr -C 100 -e 'Shooter' /home/$USER/.config/Readarr/logs/*.trace*.txt
```
* The flags have the following functions:
  * -i: ignore case
  * -n: show line number
  * -r: recursively check all files in the path
  * -C: show the given number of lines of context before and after each match
  * -e: the pattern to search for
The log files are located in Readarr's Appdata Directory, inside the logs/ folder. You can also access the log files from the UI at System => Logs => Files.
Note: The Logs ("Events") Table in the UI is not the same as the log files and isn't as useful. If you're asked for logs, please copy/paste from the log files and not the table.
The update log files are located in Readarr's Appdata Directory, inside the UpdateLogs/ folder.
The logs can be long and hard to read as part of a forum or Reddit post and they're spammy in Discord, so please use Pastebin, Hastebin, Gist, 0bin, or any other similar pastebin site. The whole file typically isn't needed, just a good amount of context from before and after the issue/error. Do not forget to wait for spammy tasks like an RSS sync or library refresh to finish.
You can change the log level at Settings => General => Logging. Readarr does not need to be restarted for the change to take effect. This change only affects the log files, not the logging database. The latest debug/trace log files are named `readarr.debug.txt` and `readarr.trace.txt` respectively.
If you're unable to access the UI to set the logging level, you can do so by editing config.xml in the AppData directory, setting the LogLevel value to debug or trace instead of info:
```xml
<Config>
  [...]
  <LogLevel>debug</LogLevel>
  [...]
</Config>
```
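If you prefer the command line, a minimal sketch of that edit (the appdata path is a placeholder; back up the file, and stop Readarr first so it doesn't overwrite the change on shutdown):

```shell
# back up the config before editing (substitute your own appdata path)
cp /path/to/appdata/config.xml /path/to/appdata/config.xml.bak
# switch whatever LogLevel is currently set to debug
sed -i 's|<LogLevel>.*</LogLevel>|<LogLevel>debug</LogLevel>|' /path/to/appdata/config.xml
```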
You can clear log files and the logs database directly from the UI, under System => Logs => Files and System => Logs => Delete (Trash Can Icon)
Readarr uses rolling log files limited to 1MB each. The current log file is always `readarr.txt`; for the other files, `readarr.0.txt` is the next newest (the higher the number, the older it is). This log file contains `fatal`, `error`, `warn`, and `info` entries.
When Debug log level is enabled, additional `readarr.debug.txt` rolling log files will be present. These log files contain `fatal`, `error`, `warn`, `info`, and `debug` entries. They usually cover a 40-hour period.
When Trace log level is enabled, additional `readarr.trace.txt` rolling log files will be present. These log files contain `fatal`, `error`, `warn`, `info`, `debug`, and `trace` entries. Due to trace verbosity, they only cover a couple of hours at most.
We do everything we can to prevent issues when upgrading, but if they do occur this will walk you through the steps to take to recover your installation.
If your system cleaned the `/tmp` directory and deleted critical *Arr files during the upgrade, both the upgrade and the rollback will fail. In this case, simply reinstall in-place over the existing borked installation.

A migration issue will show in the logs like:

```
14-2-4 18:56:49.5|Info|MigrationLogger|*** 36: update_with_quality_converters migrating ***
14-2-4 18:56:49.6|Error|MigrationLogger|SQL logic error or missing database duplicate column name: Items
While Processing: "ALTER TABLE "QualityProfiles" ADD COLUMN "Items" TEXT"
```
Permissions issues are due to the application being unable to access the relevant temporary folders and/or the app binary folder. Fix the permissions so that the user/group the application runs as has the appropriate access.
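For example, on a typical Linux install, a minimal sketch (the `readarr` user/group and the `/opt/Readarr` install path are assumptions; substitute your own):

```shell
# give the service user ownership of the app binary folder
sudo chown -R readarr:readarr /opt/Readarr
# ensure the owner can read/write files and traverse directories
sudo chmod -R u+rwX /opt/Readarr
```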
Synology users may encounter this Synology bug: `Access to the path '/proc/{some number}/maps' is denied`
Synology users may also encounter being out of space in `/tmp` on certain NASes. You'll need to specify a different `/tmp` path for the app. See the SynoCommunity or other Synology support channels for help with this.
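You can quickly check whether `/tmp` is out of space:

```shell
# a full /tmp will break updates
df -h /tmp
```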
In the event of a migration issue there is not much you can do immediately. If the issue is specific to you (or there are not yet any posts), please create a post on our subreddit or swing by our Discord; if there are others with the same issue, then rest assured we are working on it.
Please ensure you did not try to use a database from `nightly` on the stable version. Branch hopping is ill-advised.
Fix the permissions to ensure the user/group the application is running as can access (read and write) both `/tmp` and the installation directory of the application.
For Synology users experiencing issues with `/proc/###/maps`, stopping Readarr (or the other *Arr applications) and updating should resolve this. This is an issue with the SynoCommunity package.
Grab the latest release from our website.
Install the update (.exe) or extract (.zip) the contents over your existing installation and re-run Readarr as you normally would.
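On Linux, a minimal sketch of an in-place manual update (the service name, tarball name, and `/opt` install path are assumptions; adjust for your system):

```shell
sudo systemctl stop readarr
# extract the downloaded release over the existing installation
sudo tar -xvzf Readarr*.linux-core-x64.tar.gz -C /opt/
sudo systemctl start readarr
```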
Downloading and importing is where most people experience issues. From a high level perspective, Readarr needs to be able to communicate with your download client and have access to the files it downloads. There is a large variety of supported download clients and an even bigger variety of setups. This means that while there are some common setups, there isn’t one right setup and everyone’s setup can be a little different.
The first step is to turn logging up to Trace, see Logging and Log Files for details on adjusting logging and searching logs. You’ll then reproduce the issue and use the trace level logs from that time frame to examine the issue. If someone is helping you, put context from before/after in a pastebin, Gist, or similar site to show them. It doesn’t need to be the whole file and it shouldn’t just be the error. You should also reproduce the issue while tasks that spam the log file aren’t running.
When you reach out for help, be sure to read asking for help so that you can provide us with the details we’ll need.
Ensure your download client(s) are running. Start by testing the download client; if it doesn’t work, you’ll be able to see details in the trace level logs. You should find a URL you can put into your browser and see if it works. It could be a connection problem, which could indicate a wrong IP, hostname, or port, or even a firewall blocking access. It might be obvious, like an authentication problem where you’ve gotten the username, password, or API key wrong.
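A quick reachability check from the machine Readarr runs on can rule out network problems. A sketch, assuming (for illustration) a qBittorrent WebUI on port 8080:

```shell
# expect HTTP response headers back; a timeout or refusal points to IP/port/firewall issues
curl -sI http://localhost:8080/
# if Readarr runs in Docker, test from inside its container instead (requires curl in the image)
docker exec readarr curl -sI http://qbittorrent:8080/
```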
Now we’ll try a download: pick a book and do a manual search. Pick one of those files and attempt to download it. Does it get sent to the download client? Does it end up with the correct category? Does it show up in Activity? Does it end up in the trace level logs during the Check For Finished Download task, which runs roughly every minute? Does it get correctly parsed during that task? Does the queued download have a reasonable name? Since searches are by ID on some indexers/trackers, a download can get queued up with a name that Readarr can’t recognize.
Import issues should almost always manifest as an item in Activity with an orange icon you can hover over to see the error. If downloads aren't showing up in Activity, that is the issue you need to focus on first, so go back and figure that out. Most import errors are permissions issues; remember that Readarr needs to be able to read and write in the download folder. Sometimes permissions in the library folder can be at fault too, so be sure to check both.
Incorrect path issues are possible too, though less common in normal setups. The key to understanding path issues is knowing that Readarr gets the path to the download from the download client, via its API. This becomes a problem in more unique use cases, like the download client running on a different system (maybe even a different OS!). It can also occur in a Docker setup when volumes are not done well. A remote path map is a good solution where you don’t have control, like a seedbox setup. On a Docker setup, fixing the paths is a better option.
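As a hypothetical sketch of doing Docker volumes well, both containers mount the same host folder at the same container path (image tags are examples; other required flags such as ports, PUID/PGID, and config volumes are omitted for brevity):

```shell
# both apps see the download at the same /data/... path, so no remote path map is needed
docker run -d --name readarr     -v /srv/data:/data lscr.io/linuxserver/readarr:develop
docker run -d --name qbittorrent -v /srv/data:/data lscr.io/linuxserver/qbittorrent:latest
```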
Below are some common problems.
When Readarr imports, it imports in order of the priorities in your quality profile, regardless of whether a quality is checked or not. To resolve this issue, you need to drag your checked formats to the top of the quality list. For example, if AZW3 is ranked above EPUB, then even though only EPUB is wanted, a download containing both an AZW3 and an EPUB will have the AZW3 imported in priority over the EPUB, causing unwanted formats to be imported.
Readarr talks to your download client via its API and accesses it via the client's WebUI. You must ensure the client's WebUI is enabled and that the port it uses does not conflict with any other client ports in use or other ports in use on your system.
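On Linux you can check for a port conflict directly; 8080 here is just an example port:

```shell
# list whatever is listening on the WebUI port
ss -tlnp | grep 8080
```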
Ensure SSL encryption is not turned on if you're using both your instance and your download client on a local network. See the SSL FAQ entry for more information.
The default user for a Windows service is `LocalService`, which typically doesn’t have access to your shares. Edit the service and set it up to run as your own user; see the FAQ entry "why can’t I see my files on a remote server" for details.
While mapped network drives like `X:\` are convenient, they aren’t as reliable as UNC paths like `\\server\share` and they’re also not available before login. Set up Readarr and your download client(s) so that they use UNC paths as needed. If your library is on a share, make sure your root folders are using UNC paths. If your download client sends to a share, that is where you’ll need to configure UNC paths, since Readarr gets the download path from the download client. It is fine to keep your mapped network drives to use yourself, just don’t use them for automation.
Docker adds another layer of complexity that is easy to get wrong while still ending up with a setup that functions but has various problems. Instead of going over them here, read this wiki article about these automation applications and Docker, which is all about users, groups, ownership, permissions, and paths. It isn’t specific to any Docker system; instead it goes over things at a high level so that you can implement them in your own environment.
If you have Readarr in Docker and the Download Client in non-Docker (or vice versa) or have the programs on different servers then you may need a remote path map.
Logs will look like:

```
2022-02-03 14:03:54.3|Error|DownloadedBooksImportService|Import failed, path does not exist or is not accessible by Readarr: /volume3/data/torrents/audiobooks/Party of Two - Jasmine Guillory.mp3. Ensure the path exists and the user running Readarr has the correct permissions to access this file/folder
```

Thus `/volume3/data` does not exist within Readarr's container or is not accessible.
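In that case, a remote path map translates the client's reported path into one Readarr can see. A hypothetical mapping for the error above (values are illustrative):

```
Host:        <your download client's host, as configured in Readarr>
Remote Path: /volume3/data/torrents/
Local Path:  /data/torrents/
```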
If both *Arr and your Download Client are Docker containers, it is rare that a remote path map is needed. It is suggested you review the Docker Guide and/or follow TRaSH's Tutorial.
Logs will look like:

```
2022-02-28 18:51:01.1|Error|DownloadedBooksImportService|Import failed, path does not exist or is not accessible by Readarr: /data/media/books/Jasmine Guillory/Party of Two - Jasmine Guillory.mp3. Ensure the path exists and the user running Readarr has the correct permissions to access this file/folder
```
Don’t forget to check permissions and ownership of the destination. It is easy to get fixated on the download’s ownership and permissions, and that is usually the cause of permissions related issues, but it could be the destination as well. Check that the destination folder(s) exist. Check that a destination file doesn’t already exist that can’t be deleted or moved to the recycle bin. Check that ownership and permissions allow the downloaded file to be copied, hard linked, or moved. The user or group that Readarr runs as needs to be able to read and write the root folder.
For Windows users, this may be due to running as a service; see the note on Windows services above.
For Synology Users refer to SynoCommunity's Permissions Article for their Packages
Non-Windows: If you're using an NFS mount, ensure `nolock` is enabled. If you're using an SMB mount, ensure `nobrl` is enabled.
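A quick way to sanity-check the destination on Linux, assuming (for illustration) Readarr runs as user `readarr` and the root folder is `/data/media/books`:

```shell
# who owns the destination, and what are its permissions?
ls -ld /data/media/books
# can the service user actually write there?
sudo -u readarr test -w /data/media/books && echo writable || echo NOT writable
```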
Logs will look like:

```
2022-02-28 18:51:01.1|Error|DownloadedBooksImportService|Import failed, path does not exist or is not accessible by Readarr: /data/torrents/books/Party of Two - Jasmine Guillory.mp3. Ensure the path exists and the user running Readarr has the correct permissions to access this file/folder
```
Don’t forget to check permissions and ownership of the source. It is easy to get fixated on the destination's ownership and permissions, and that is a possible cause of permissions related issues, but it is typically the source. Check that the source folder(s) exist. Check that ownership and permissions allow the downloaded file to be copied/hardlinked or copy+delete/moved. The user or group that Readarr runs as needs to be able to read and write the downloads folder.
For Windows users, this may be due to running as a service; see the note on Windows services above.
For Synology Users refer to SynoCommunity's Permissions Article for their Packages
Non-Windows: If you're using an NFS mount, ensure `nolock` is enabled. If you're using an SMB mount, ensure `nobrl` is enabled.
Do not use the same folder for downloads and your library: if your download client saves to `\data\downloads`, then you must not have a root folder set as `\data\downloads`. Use `\data\media\` for your root folder/library and `\data\downloads\` for your downloads. Your download folder and your root/library folder MUST be separate.
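A common working layout looks like this (folder names are examples):

```
/data
├── downloads   # the download client saves here
└── media       # Readarr root folder/library
```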
Readarr should be set up to use a category so that it only tries to process its own downloads. It is rare that a torrent submitted by Readarr gets added without the correct category, but it can happen. If you’re adding torrents manually and want Readarr to process them, they’ll need to have the correct category. It can be set at any time, since Readarr tries to process downloads every minute.
Logs will indicate errors like `No files found are eligible for import`.
If your torrent is packed in `.rar` files, you’ll need to set up extraction. We recommend Unpackerr, as it does unpacking right: it prevents corrupt partial imports and cleans up the unpacked files after import.
The error may also be seen if there is no valid media file in the folder.
There are a few causes of repeated downloads, but one is related to the Indexer restriction in Release Profiles. Because the indexer isn’t stored with the data, any preferred word scores are zero for media in your library, but during “RSS” and search, they’ll be applied. This gets you into a loop where you download the items again and again because it looks like an upgrade, then isn’t, then shows up again and looks like an upgrade, then isn’t. Don’t restrict your release profile to an indexer.
This may also be due to the fact that the download never actually imports and then is missing from the queue, so a new download is perpetually grabbed and never imported. Please see the various other common problems and troubleshooting steps for this.
Readarr only looks at the 60 most recent downloads in SABnzbd and NZBGet, so if you keep your history, downloads can be silently missed and never imported during large queues with import issues. The best way to avoid that is to keep your history clear, so that any items that still appear need investigating. You can achieve this by enabling Remove under Completed and Failed Download Handling. In NZBGet, this will move items to the hidden history, which is great. Unfortunately, SABnzbd does not have a similar feature. The best you can achieve there is to use the nzb backup folder.
The download client should not be responsible for removing downloads. Usenet clients should be configured so they don’t remove downloads from history. Torrent clients should be set up so they don’t remove torrents when they’re finished seeding (pause or stop instead). This is because Readarr communicates with the download client to know what to import, so if they’re removed there is nothing to be imported… even if there is a folder full of files.
For SABnzbd, this is handled with the History Retention setting.
For various reasons, releases cannot be parsed once grabbed and sent to the download client. Activity => Options => Show Unknown (this is now enabled by default in recent builds) will display all items not otherwise ignored / already imported within *Arr's download client category. These will typically need to be manually mapped and imported.
This can also occur if you have a release in your download client but that media item (movie/episode/book/song) does not exist in the application.
This is caused by the indexer using an SSL protocol not supported by the current .NET version (found in Readarr => System => Status).
Readarr is getting no response from the client.

```
System.Net.WebException: The request timed out: 'https://example.org/api?t=caps&apikey=(removed)' ---> System.Net.WebException: The request timed out
2022-11-01 10:16:54.3|Warn|Newznab|Unable to connect to indexer
[v4.3.0.6671] System.Threading.Tasks.TaskCanceledException: A task was canceled.
```
This can also be caused by:
You can also review some common permissions and networking troubleshooting commands in our guide. Otherwise please discuss with the support team on discord. If this is something that may be a common problem, please suggest adding it to the wiki.
In Prowlarr, you can review the exact query parameters Readarr sent if `Parameters` is enabled in Prowlarr History => Options. The (i) icon provides additional details.

The first step is to turn logging up to Trace; see Logging and Log Files for details on adjusting logging and searching logs. You’ll then reproduce the issue and use the trace level logs from that time frame to examine the issue. If someone is helping you, put context from before/after in a pastebin, Gist, or similar site to show them. It doesn’t need to be the whole file and it shouldn’t just be the error. You should also reproduce the issue while tasks that spam the log file aren’t running.
When you test an indexer or tracker, you can find the URL used in the debug or trace logs: you'll see Readarr query the indexer via a specific URL with specific parameters, and then the response. You can test this URL in your browser, replacing `apikey=(removed)` with your actual API key, e.g. `apikey=123`. You can experiment with the parameters if you’re getting an error from the indexer, or check whether you have connectivity issues if it doesn’t work at all. After you’ve tested in your own browser, you should test from the system Readarr is running on, if you haven’t already.
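For example, to test an indexer's capabilities endpoint from that system (the URL and key are placeholders):

```shell
# a working indexer returns an XML caps document
curl 'https://indexer.example.org/api?t=caps&apikey=123'
```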
Just like the indexer/tracker test above, when you trigger a search while at Debug or Trace level logging, you can get the URL used from the log files. While testing, it is best to use as narrow a search as possible. A manual search is good because it is specific and you can see the results in the UI while examining the logs.
In this test, you’ll be looking for obvious errors and running some simple tests. For example, a search might use a URL like `https://api.nzbgeek.info/api?t=tvsearch&cat=5030,5040,5045,5080&extended=1&apikey=(removed)&offset=0&limit=100&tvdbid=354629&season=1&ep=1` (this particular example is from Sonarr; a Readarr book search uses book-specific parameters, but the process is identical). You can try the URL yourself in a browser after replacing `(removed)` with your API key for that indexer. Does it work? Do you see the expected results? Does this FAQ entry apply? In that URL, you can see that it set specific categories with `cat=5030,5040,5045,5080`, so if you’re not seeing expected results, this is one likely reason. You can also see that it searched by ID with `tvdbid=354629`, so if the item isn’t properly categorized on the indexer, that will need to be fixed. It also searches by a specific season and episode with `season=1` and `ep=1`, so if that isn’t correct on the indexer, you won’t see those results.
Below are some common problems.
Most likely you're using a reverse proxy and your reverse proxy timeout is set too short, causing it to give up before *Arr has completed the search query. Increase the timeout and try again.
The book(s) in question are not monitored.
Incorrect categories are probably the most common cause of results showing in manual searches of an indexer/tracker, but not in Readarr. The indexer/tracker should show the category in the search results, which should help you figure out what is missing. If you’re using Jackett or Prowlarr, each tracker has a list of specifically supported categories. Make sure you’re using the correct ones for Categories. Many find it helpful to have the list visible in one browser window while they edit the entry in Readarr.
Sometimes indexers will return completely unrelated results: Readarr will feed in parameters to limit the search, but the results returned are completely unrelated. Or sometimes they are mostly related, with a few incorrect results. The first is usually an indexer problem, and you’ll be able to tell from the trace logs which indexer is causing it. You can disable that indexer and report the problem. The other is usually incorrectly categorized releases, which should be reportable on the indexer/tracker.
You receive a message similar to `Query successful, but no results were returned from your indexer. This may be an issue with the indexer or your indexer category settings.`
This is caused by your Indexer failing to return any results that are within the categories you configured for the Indexer.
If you can find results on the site that are not showing in Readarr, then your issue is likely one of several possibilities. One thing to check is the search query itself, which appears in the URL as something like `q=words%20and%20things%20here`; this string is URL encoded and can be easily decoded using any URL decoding/encoding tool online.

You’ll be connecting to most indexers/trackers via HTTPS, so you’ll need that to work properly on your system. That means your time zone and time both need to be set correctly. It also means your system certificates need to be up to date.
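For example, a one-liner to decode such a string (the query text is illustrative):

```shell
# URL-decode a query parameter
python3 -c "import urllib.parse; print(urllib.parse.unquote('words%20and%20things%20here'))"
# prints: words and things here
```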
If you run Readarr through a VPN or proxy, you may be competing with tens, hundreds, or thousands of other people all trying to use services like the metadata server and/or your indexers and trackers. Rate limiting and DDOS protection are often done by IP address, and your VPN/proxy exit point is one IP address. Unless you’re in a repressive country like China, Australia, or South Africa, you don’t need to VPN/proxy Readarr.
Similarly to rate limits, certain indexers - such as Nyaa - may outright ban an IP address. This is typically semi-permanent, and the solution is to get a new IP from your ISP or VPN provider.
The Jackett `/all` endpoint is convenient, but that is its only benefit. Everything else is potential problems, so adding each tracker individually is required. Alternatively, you may wish to check out Prowlarr, the Jackett & NZBHydra2 alternative. Even Jackett says `/all` should be avoided and should not be used.
Using the `/all` endpoint has no advantages (besides reduced management overhead), only disadvantages. Adding each indexer separately allows for fine tuning of categories on a per-indexer basis, which matters because using the wrong category via the `/all` endpoint causes errors on some trackers. In Readarr, each indexer is limited to 1000 results if pagination is supported or 100 if not, which means that as you add more and more trackers to Jackett, you’re more and more likely to clip results. Finally, if one of the trackers in `/all` returns an error, Readarr will disable it and now you don’t get any results.
Using NZBHydra2 as a single indexer entry (i.e. one NZBHydra2 entry in Readarr for many indexers in NZBHydra2) rather than multiple entries (i.e. many NZBHydra2 entries in Readarr for many indexers in NZBHydra2) has the same problems as noted above for Jackett's `/all` endpoint.
If a book imports with an incorrect edition, or you need to change that edition, you will need to move that book file entirely out of Readarr's root folder, then use Wanted/Manual Import to re-import it, choosing the correct edition using the drop-down at the bottom of the screen. This is the only working way to change the edition of a book after it's been imported.
You can also review some common permissions and networking troubleshooting commands in our guide. Otherwise please discuss with the support team on discord. If this is something that may be a common problem, please suggest adding it to the wiki.
These are some of the common errors you may see when adding an indexer
This is caused by the indexer using an SSL protocol not supported by the current .NET version (found in Readarr => System => Status).
Readarr is getting no response from the indexer.

```
System.Net.WebException: The request timed out: 'https://example.org/api?t=caps&apikey=(removed)' ---> System.Net.WebException: The request timed out
2022-11-01 10:16:54.3|Warn|Newznab|Unable to connect to indexer
[v4.3.0.6671] System.Threading.Tasks.TaskCanceledException: A task was canceled.
```
This can also be caused by:
You can also review some common permissions and networking troubleshooting commands in our guide. Otherwise please discuss with the support team on discord. If this is something that may be a common problem, please suggest adding it to the wiki.
This indicates that there is a problem with the metadata server. If the error is a 521 error, then it means the Cloudflare gateway has an issue reaching the metadata server. Either the metadata server as a whole is down temporarily, or that specific piece of it is down.
Sometimes you can still add an author via the `author:authorID` search method when you get this error.
See Readarr Status for more information.