

All storage-sync/add-on settings disappear


I'm using an add-on (MarkA link) to annotate links. It works fine, except that from time to time all marks disappear, without any error message or other indication. The data is kept in the `storage-sync-v2.sqlite` database as a single "big" JSON document split into chunks (each chunk is 1084 bytes long). The application data is JSON as well (an object where each link is a key, with additional data for that link as the value). All characters in the "inner" JSON that are not allowed in the "outer" JSON (the one with the `na` objects) are escaped with '\'. I observed that `storage_sync_data::data` can be split incorrectly, so that an escape character ends up escaping part of the "outer" JSON. In other words, two `na` chunks get joined into something that can no longer be parsed, after which the whole data set is removed, e.g.:

```
{\"grp\":\"bibl\",\"type\":\"Mark\","na_9":"",\"mark\":\"\"},
```

Here the "outer" JSON breaks at `Mark`: the `"` from the "inner" document is split across two entries, and the closing `"` is escaped by the '\' left over from the `Mark` value.

Things get even worse if the properties are allowed to contain "special" characters that are themselves escaped with '\' (e.g. newline, quotation mark, backslash, etc.).
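To make the failure mode concrete, here is a small sketch that reproduces the broken snippet above. The fixed-width split, the `na_<n>` keys and the raw concatenation are my assumptions about where the escaping goes wrong, not the actual Firefox or MarkALink code:

```
// Sketch only: reproduce the symptom by splitting the already-escaped
// "inner" JSON between the '\' and the '"' of an escaped quote.
const inner = JSON.stringify({ grp: "bibl", type: "Mark", mark: "" });

// The inner document as it appears embedded inside the outer JSON
// (every quote becomes \" , every backslash becomes \\ ).
const escaped = JSON.stringify(inner).slice(1, -1);

// Cut right after the '\' of the escaped quote that closes "Mark".
// With real data this happens whenever a 1084-byte boundary lands there.
const cut = escaped.indexOf("Mark") + 5;
const chunkA = escaped.slice(0, cut);   // ends with a bare backslash
const chunkB = escaped.slice(cut);      // starts with the orphaned quote

// Build the outer document by raw concatenation, without re-escaping the
// chunk values. The trailing '\' of chunkA now escapes the quote that was
// supposed to close the "na_8" value.
const outer = `{"na_8":"${chunkA}","na_9":"${chunkB}"}`;

try {
  JSON.parse(outer);
} catch (e) {
  console.log("outer JSON is unparseable:", e.message);
}
```

The middle of `outer` is essentially the broken text quoted above, and `JSON.parse` rejects it.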


All Replies (4)


Hi Marek

This does not sound like an issue with Firefox itself, but rather something that the independent author of that add-on should be able to help you with.


I don't know how to reconstruct data directly from extension storage databases (some kind of compression is applied). However, you could take a look at the data Firefox is reading through the following method:

(1) Type or paste about:debugging in the address bar and press Enter to load that page

(2) In the left column, click "This Firefox"

(3) Scroll down in the extension list until you find MarkA link, then click the "Inspect" button to open a dev tools window

(4) Change to the Storage tab, then expand the Extension Storage category to see what data Firefox is reading from extension storage. Some extensions might also use IndexedDB. (A console snippet for dumping the same data is sketched below.)
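If the Storage panel is awkward to read, you can also dump the same data as text from the console of that dev tools window. This assumes the inspected context is the extension's background page, where the `browser` object is available:

```
// Run in the console of the window opened via "Inspect".
// Passing null asks for the entire contents of each storage area.
browser.storage.sync.get(null).then(data =>
  console.log("sync:\n" + JSON.stringify(data, null, 2)));
browser.storage.local.get(null).then(data =>
  console.log("local:\n" + JSON.stringify(data, null, 2)));
```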


Note: if there is a problem with applying saved highlights or applying new ones, it could be related to a change in Firefox 140. See: https://support.mozilla.org/questions/1522859


I briefly looked at the source code of MarkALink, and it mostly uses local storage, not sync storage.


It uses both local storage (to keep some data about groups, etc.) and `chrome.storage.sync`/`storage-sync-v2.sqlite` (to keep the links with their annotations, etc.). The second part is the one that fails. From reading helpers.js it is not clear to me where the bug is: either the MarkALink author should stringify/split his data in such a way that the "overall" JSON stays parsable (i.e. he would have to know that the data is stored as JSON), or chrome.storage.sync.set() should be responsible for that job. It seems that storing stringified JSON split into constant-size pieces causes the whole storage to become unparsable (and then removed). I still need to check how the document is handled in Chrome; it wasn't failing there, but since the failure depends on the data I'm not 100% sure.
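For reference, this is roughly the save pattern I mean. The `na_<n>` key names, the 1084-byte size and the helper itself are my guesses from the database contents, not the actual helpers.js code:

```
// Guess at the save path, not the add-on's real code: stringify the
// annotation map and store it as fixed-size pieces under na_<n> keys.
async function saveAnnotations(annotations) {
  const doc = JSON.stringify(annotations);   // the "inner" JSON
  const CHUNK = 1084;                        // size observed in the sqlite file
  const record = {};
  for (let i = 0, n = 0; i < doc.length; i += CHUNK, n++) {
    record[`na_${n}`] = doc.slice(i, i + CHUNK);
  }
  // Each value is an ordinary JS string at this point; the record is then
  // serialized into the "outer" JSON in storage-sync-v2.sqlite by the
  // storage.sync backend. If the escaping goes wrong at that step, the bug
  // is not in the add-on.
  await browser.storage.sync.set(record);    // chrome.storage.sync.set() with a callback in Chrome
}
```

If instead the add-on hands over one big string and Firefox does the chunking itself, then the split/escape order inside storage.sync is the only place the corruption can come from.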
