2026-01-22 21:04:23 -05:00
commit 14b9e814da
6 changed files with 1309 additions and 0 deletions

README.md Normal file

@@ -0,0 +1,293 @@
# Anki tools for language learning
A modular collection of tools and scripts to enhance your Anki-based language learning. These tools focus on listening, sentence mining, sentence decks, and more. Built for language learners and immersion enthusiasts.
### Tools Overview
| Tool | Purpose |
|---------------------------------|-------------------------------------------------------------------------|
| `audio-extractor` | Extract Anki card audio by language into playlists for passive listening |
| `batch_importer` | Generate TTS audio from sentence lists and import into Anki |
| `word-scraper` | Extract & lemmatize words from Anki decks (frequency analysis, mining) |
| `yt-transcript` | Mine vocabulary/sentences from YouTube transcripts for analysis |
| `deck-converter`* | Convert TSV+audio into `.apkg` Anki decks using config-driven workflow |
| `youtube-to-anki`* | Convert YouTube subtitles/audio into fully timestamped Anki cards |
\* = coming soon
### Requirements
Each tool has its own set of dependencies. Common dependencies include:
- Python3
- [Anki](https://apps.ankiweb.net/) with [AnkiConnect](https://github.com/amikey/anki-connect)
- `yt-dlp`, `jq`, `yq`, `spaCy`, `gTTS`, `youtube-transcript-api`, `pyyaml`, `genanki`, `fugashi`, `regex`, `requests`
- `ffmpeg`
Personally, I like to have one venv that contains all the prerequisites.
```shell
python3.12 -m venv ~/.venv/anki-tools
source ~/.venv/anki-tools/bin/activate
python3 -m pip install -U pip
pip install gtts jq yq spacy youtube-transcript-api pyyaml genanki fugashi regex requests
# Also install ffmpeg
sudo dnf install ffmpeg
```
That way, whenever you want to run these scripts, you can just source the venv and run the appropriate script.
```shell
source ~/.venv/anki-tools/bin/activate
```
### Getting started
Clone the repository:
```shell
git clone https://git.pawelsarkowicz.xyz/ps/anki-tools.git
cd anki-tools
```
Then explore.
Most scripts assume:
- Anki is running
- the AnkiConnect add-on is enabled (default: http://localhost:8765)
- your Anki cards use the Basic note type, with audio on the front and the sentence (in the target language) on the back. These tools only look at the first line of the back field, so you can keep notes, translations, etc. on the following lines if you like.
![anki_basic_card_jp](./figures/anki_basic_card_jp.png)
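If you want to confirm those assumptions before running anything, a quick round trip against AnkiConnect is enough. The following is just a throwaway sketch (not part of the toolkit) that pings the same endpoint and actions the scripts below use:

```python
"""Sanity check: is Anki running with AnkiConnect listening on localhost:8765?"""
import requests

def anki(action: str, **params):
    """Send one AnkiConnect request and return its 'result' field."""
    resp = requests.post(
        "http://localhost:8765",
        json={"action": action, "version": 6, "params": params},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    if data.get("error") is not None:
        raise RuntimeError(data["error"])
    return data["result"]

print("AnkiConnect API version:", anki("version"))  # expect 6
print("Decks:", anki("deckNames"))                  # should list your decks, e.g. 日本語 / Español
```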
### Language support
- 🇯🇵 Japanese
- 🇪🇸 Spanish
- 🇬🇧 English
## audio-extractor
**Purpose**: Extract audio referenced by `[sound:...]` tags from Anki decks, grouped by language.
### Usage
```bash
./audio_extractor.py jp [--concat] [--outdir DIR] [--copy-only-new]
./audio_extractor.py es [--concat] [--outdir DIR] [--copy-only-new]
```
Outputs:
- Copies audio into `~/Documents/anki-audio/<language>/` by default
- Writes `<language>.m3u`
- With `--concat`, writes `<language>_concat.mp3` (keeps individual files)
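The heart of the extraction is a single regex run over each note field returned by AnkiConnect. Roughly (the field value below is invented for illustration):

```python
import re

# A field value as returned by AnkiConnect's notesInfo (example content is made up)
field_value = "今日はいい天気ですね [sound:rec_001.mp3][sound:rec_002.mp3]"

# Same pattern audio_extractor.py uses to find referenced media filenames
filenames = re.findall(r"\[sound:(.+?)\]", field_value)
print(filenames)  # ['rec_001.mp3', 'rec_002.mp3']
```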
### Requirements
- Anki + AnkiConnect
- `requests`
- `ffmpeg` (only if you use `--concat`)
## batch_importer
**Purpose**: Generate TTS audio from a sentence list and add notes to Anki via AnkiConnect.
### Usage
```bash
./batch_anki_import.sh {jp,es}
```
- Generates one MP3 per sentence with `gtts-cli`, speeds it up with `ffmpeg` (pitch preserved), and adds a Basic note with the audio attached via AnkiConnect.
- The audio ends up in Anki's media collection; the temporary MP3s are cleaned up after the import.
### Requirements
- Anki + AnkiConnect
- `gtts-cli`, `ffmpeg`, `curl`
### Sentence files
- Japanese: `~/Documents/sentences_jp.txt`
- Spanish: `~/Documents/sentences_es.txt`
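For reference, the same workflow in Python. This is only a sketch of the idea: the shipped script is Bash, generates unique filenames, and also runs the `ffmpeg` tempo step, which this version skips. The filename, path, and sentence here are placeholders.

```python
from gtts import gTTS
import requests

def add_tts_note(sentence: str, deck: str = "Español", lang: str = "es", tld: str = "es") -> dict:
    """Generate TTS audio for one sentence and add it as a Basic note via AnkiConnect."""
    filename = "tts_example.mp3"           # placeholder; the script derives unique names
    path = f"/tmp/{filename}"
    gTTS(sentence, lang=lang, tld=tld).save(path)
    note = {
        "deckName": deck,
        "modelName": "Basic",
        "fields": {"Front": "", "Back": sentence},
        "options": {"allowDuplicate": False},
        "tags": ["AI-generated", "text-to-speech"],
        # AnkiConnect copies the file into collection.media and puts [sound:...] on the Front
        "audio": [{"path": path, "filename": filename, "fields": ["Front"]}],
    }
    resp = requests.post(
        "http://localhost:8765",
        json={"action": "addNote", "version": 6, "params": {"note": note}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

print(add_tts_note("¿Dónde está la biblioteca?"))
```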
## word-scraper
Extract frequent words from Anki notes using **AnkiConnect** and **spaCy**.
This is primarily intended for language learning workflows (currently Japanese and Spanish).
The script:
- queries notes from Anki
- extracts visible text from a chosen field
- tokenizes with spaCy
- filters out stopwords / grammar
- counts word frequencies
- writes a sorted word list to a text file (see the sketch below)
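For the Spanish profile, that pipeline boils down to roughly the following. This is a sketch only: it assumes `es_core_news_sm` is installed, and the example lines are invented stand-ins for the first visible line of each note's Back field.

```python
from collections import Counter
import spacy

# Assumes: python -m spacy download es_core_news_sm
nlp = spacy.load("es_core_news_sm")

# Stand-ins for the "first visible line" of each note's Back field
lines = [
    "Me gusta comer manzanas.",
    "Ayer comí dos manzanas y hablé con mi abuela.",
]

counts: Counter = Counter()
for doc in nlp.pipe(lines):
    for tok in doc:
        # Same idea as the Spanish filter: alphabetic, non-stopword tokens, counted by lemma
        if tok.is_alpha and not tok.is_stop:
            counts[tok.lemma_.lower()] += 1

for word, freq in counts.most_common():
    print(word, freq)
```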
### Requirements
- Anki + AnkiConnect
- Python **3.12** (recommended; spaCy is not yet stable on 3.14)
- `spacy`, `regex`, `requests`
- spaCy models:
```bash
python -m spacy download es_core_news_sm
python -m spacy download ja_core_news_lg
```
### Usage
```bash
./word_scraper.py {jp,es} [options]
```
| Option | Description |
| --------------------- | -------------------------------------------------------------------- |
| `--query QUERY` | Full Anki search query (e.g. `deck:"Español" tag:foo`) |
| `--deck DECK` | Deck name (repeatable). If omitted, decks are inferred from language |
| `--field FIELD` | Note field to read (default: `Back`) |
| `--min-freq N` | Minimum frequency to include (default: `2`) |
| `--outdir DIR` | Output directory (default: `~/Documents/anki-words/<language>`) |
| `--out FILE` | Output file path (default: `<outdir>/words_<lang>.txt`) |
| `--full-field` | Use full field text instead of only the first visible line |
| `--spacy-model MODEL` | Override spaCy model name |
| `--logfile FILE` | Log file path |
### Examples
#### Basic usage (auto-detected decks)
```bash
./word_scraper.py jp
./word_scraper.py es
```
#### Specify a deck explicitly
```bash
./word_scraper.py jp --deck "日本語"
./word_scraper.py es --deck "Español"
```
#### Use a custom Anki query
```bash
./word_scraper.py es --query 'deck:"Español" tag:youtube'
```
#### Change output location and frequency threshold
```bash
./word_scraper.py jp --min-freq 3 --out words_jp.txt
./word_scraper.py es --outdir ~/tmp/words --out spanish_words.txt
```
#### Process full field text (not just first line)
```bash
./word_scraper.py jp --full-field
```
### Output format
The output file contains one entry per line:
```
word frequency
```
Examples:
```
comer 12
hablar 9
行く (行き) 8
見る (見た) 6
```
- Spanish output uses lemmas
- Japanese output includes lemma (surface) when they differ
### Language-specific notes
#### Japanese
- Filters out particles and common grammar
- Keeps nouns, verbs, adjectives, and proper nouns
- Requires `regex` for Unicode script matching
#### Spanish
- Filters stopwords
- Keeps alphabetic tokens only
- Lemmatized output
## yt-transcript
Extract vocabulary or sentence-level text from YouTube video subtitles (transcripts), for language learning or analysis.
The script:
- fetches captions via `youtube-transcript-api`
- supports **Spanish (es)** and **Japanese (jp)**
- tokenizes Japanese using **MeCab (via fugashi)**
- outputs either:
- word frequency lists, or
- timestamped transcript lines (see the sketch below)
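Vocab mode, stripped to its essentials, is fetch transcript → join text → tokenize → count. A minimal Japanese sketch using the older `get_transcript` class method (the script itself supports both the old and new `youtube-transcript-api` APIs; `VIDEO_ID` is a placeholder):

```python
from collections import Counter
from youtube_transcript_api import YouTubeTranscriptApi
from fugashi import Tagger  # pip install "fugashi[unidic-lite]"

VIDEO_ID = "VIDEO_ID"  # placeholder

# Older v0.x API: returns a list of {"text": ..., "start": ..., "duration": ...} dicts
transcript = YouTubeTranscriptApi.get_transcript(VIDEO_ID, languages=["ja"])
text = " ".join(entry["text"] for entry in transcript)

# MeCab tokenization via fugashi; surface forms only, matching the script
tokens = [w.surface for w in Tagger()(text)]

for word, count in Counter(tokens).most_common(20):
    print(f"{word}: {count}")
```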
### Features
- Extract full vocabulary lists with frequency counts
- Extract sentences (with timestamps or sentence indices)
- Support for Japanese tokenization
- Optional: stopword filtering
- Modular and extendable for future features like CSV export or audio slicing
### Requirements
- `youtube-transcript-api`
- For Japanese tokenization:
```
pip install "fugashi[unidic-lite]"
```
### Usage
```shell
./yt-transcript.py {jp,es} <video_url_or_id> [options]
```
### Options
| Option | Description |
| -------------------------- | -------------------------------------- |
| `--mode {vocab,sentences}` | Output mode (default: `vocab`) |
| `--top N` | Show only the top N words (vocab mode) |
| `--no-stopwords` | Keep common words |
| `--raw` | (Spanish only) Do not lowercase tokens |
### Examples
#### Extract Spanish vocabulary
```bash
./yt-transcript.py es https://youtu.be/VIDEO_ID
```
#### Top 50 words
```bash
./yt-transcript.py es VIDEO_ID --top 50
```
#### Japanese transcript with timestamps
```bash
./yt-transcript.py jp VIDEO_ID --mode sentences
```
#### Keep Spanish casing and stopwords
```bash
./yt-transcript.py es VIDEO_ID --raw --no-stopwords
```
### Output formats
#### Vocabulary mode
```
palabra: count
```
Example:
```
comer: 12
hablar: 9
```
#### Sentence mode
```
[12.34s] sentence text here
```
Example:
```
[45.67s] 今日はいい天気ですね
```
### Language Notes
#### Spanish
- Simple regex-based tokenizer
- Accented characters supported
- Lowercased by default
#### Japanese
- Uses fugashi (MeCab)
- Outputs surface forms
- Filters via stopword list only (no POS filtering)
# License

audio_extractor.py Executable file

@@ -0,0 +1,254 @@
#!/usr/bin/env python3
"""
audio_extractor.py
Extract all Anki media referenced by [sound:...] tags from one or more decks (grouped by language),
copy them into a language-specific output folder, write an .m3u playlist, and optionally concatenate
all audio into a single MP3 file.
Howto:
./audio_extractor.py jp [--concat] [--outdir DIR] [--copy-only-new]
./audio_extractor.py es [--concat] [--outdir DIR] [--copy-only-new]
Requirements:
- Anki running + AnkiConnect enabled at http://localhost:8765
- Python package: requests
- OPTIONAL (for --concat): ffmpeg
Notes:
- This scans all fields of each note and extracts filenames inside [sound:...]
- It copies referenced media files out of Anki's collection.media folder
- It preserves filenames (and subfolders if they exist)
"""
import os
import re
import sys
import argparse
import shutil
import subprocess
import tempfile
from typing import Dict, List
import requests
# Map deck name -> language bucket
deck_to_language: Dict[str, str] = {
"日本語": "japanese",
"Español": "spanish",
# Add more mappings here
}
# Map CLI lang code -> language bucket
lang_map: Dict[str, str] = {
"jp": "japanese",
"es": "spanish",
}
# If Anki is installed as a flatpak, media dir is typically:
media_dir = os.path.expanduser("~/.var/app/net.ankiweb.Anki/data/Anki2/User 1/collection.media")
# Default export root (can be overridden by --outdir)
output_root = os.path.expanduser("~/Documents/anki-audio")
AUDIO_EXTS = (".mp3", ".wav", ".ogg", ".m4a", ".flac")
def anki_request(action: str, **params):
"""Make an AnkiConnect request and return 'result'. Raise on error."""
resp = requests.post(
"http://localhost:8765",
json={"action": action, "version": 6, "params": params},
timeout=30,
)
resp.raise_for_status()
data = resp.json()
if data.get("error") is not None:
raise RuntimeError(f"AnkiConnect error for {action}: {data['error']}")
return data["result"]
def ensure_ffmpeg_available() -> None:
"""Raise a helpful error if ffmpeg isn't installed."""
if shutil.which("ffmpeg") is None:
raise RuntimeError("ffmpeg not found in PATH. Install ffmpeg to use --concat.")
def build_playlist(out_dir: str, language: str) -> str:
"""
Create an .m3u playlist listing audio files in out_dir (sorted by filename).
Returns the playlist path.
"""
m3u_path = os.path.join(out_dir, f"{language}.m3u")
files = sorted(
f for f in os.listdir(out_dir)
if f.lower().endswith(AUDIO_EXTS) and os.path.isfile(os.path.join(out_dir, f))
)
with open(m3u_path, "w", encoding="utf-8") as fh:
for fname in files:
fh.write(f"{fname}\n")
return m3u_path
def concat_audio_from_m3u(out_dir: str, m3u_path: str, out_path: str) -> None:
"""
Concatenate audio files in the order listed in the .m3u.
Uses ffmpeg concat demuxer and re-encodes to MP3 for reliability.
Keeps original files untouched.
"""
ensure_ffmpeg_available()
# Read playlist entries (filenames, one per line)
with open(m3u_path, "r", encoding="utf-8") as fh:
rel_files = [line.strip() for line in fh if line.strip()]
# Filter to existing audio files
abs_files: List[str] = []
for rel in rel_files:
p = os.path.join(out_dir, rel)
if os.path.isfile(p) and rel.lower().endswith(AUDIO_EXTS):
abs_files.append(os.path.abspath(p))
if not abs_files:
raise RuntimeError("No audio files found to concatenate (playlist is empty?).")
# ffmpeg concat demuxer expects a file with lines like: file '/abs/path/to/file'
# Use a temp file so we don't leave junk behind if ffmpeg fails.
with tempfile.NamedTemporaryFile("w", delete=False, encoding="utf-8") as tmp:
concat_list_path = tmp.name
for p in abs_files:
# Escape single quotes for ffmpeg concat list
safe = p.replace("'", "'\\''")
tmp.write(f"file '{safe}'\n")
# Re-encode to MP3 to avoid header/codec mismatches across files
cmd = [
"ffmpeg",
"-hide_banner",
"-loglevel", "error",
"-f", "concat",
"-safe", "0",
"-i", concat_list_path,
"-c:a", "libmp3lame",
"-q:a", "4",
"-y",
out_path,
]
try:
subprocess.run(cmd, check=True)
finally:
try:
os.remove(concat_list_path)
except OSError:
pass
def main() -> int:
parser = argparse.ArgumentParser(
description="Extract Anki audio by language."
)
# REQUIRED positional language code: jp / es
parser.add_argument(
"lang",
choices=sorted(lang_map.keys()),
help="Language code (jp or es).",
)
# Match bash-style flags
parser.add_argument(
"--concat",
action="store_true",
help="Also output a single concatenated MP3 file (in playlist order).",
)
parser.add_argument(
"--outdir",
help="Output directory. Default: ~/Documents/anki-audio/<language>",
)
# Keep your existing useful behavior
parser.add_argument(
"--copy-only-new",
action="store_true",
help="Skip overwriting existing files.",
)
args = parser.parse_args()
language = lang_map[args.lang]
# Find all decks whose mapped language matches
selected_decks = [deck for deck, lang in deck_to_language.items() if lang == language]
if not selected_decks:
print(f"No decks found for language: {language}", file=sys.stderr)
return 1
# Output folder: either user-specified --outdir or default output_root/<language>
out_dir = os.path.expanduser(args.outdir) if args.outdir else os.path.join(output_root, language)
os.makedirs(out_dir, exist_ok=True)
# Collect note IDs across selected decks
all_ids: List[int] = []
for deck in selected_decks:
ids = anki_request("findNotes", query=f'deck:"{deck}"')
all_ids.extend(ids)
if not all_ids:
print(f"No notes found in decks for language: {language}")
return 0
# Fetch notes info (fields contain [sound:...] references)
notes = anki_request("notesInfo", notes=all_ids)
# Copy referenced audio files into out_dir
copied: List[str] = []
for note in notes:
fields = note.get("fields", {})
for field in fields.values():
val = field.get("value", "") or ""
for match in re.findall(r"\[sound:(.+?)\]", val):
src = os.path.join(media_dir, match)
dst = os.path.join(out_dir, match)
if not os.path.exists(src):
continue
# If Anki stored media in subfolders, ensure the subfolder exists in out_dir
dst_parent = os.path.dirname(dst)
if dst_parent:
os.makedirs(dst_parent, exist_ok=True)
if args.copy_only_new and os.path.exists(dst):
continue
shutil.copy2(src, dst)
copied.append(match)
# Create playlist (top-level audio only; if you have subfolders, you can extend this)
m3u_path = build_playlist(out_dir, language)
print(f"\n✅ Copied {len(copied)} files for {language}")
print(f"🎵 Playlist created at: {m3u_path}")
print(f"📁 Output directory: {out_dir}")
# Optional: concatenate all audio into one MP3 (order = playlist order)
if args.concat:
concat_out = os.path.join(out_dir, f"{language}_concat.mp3")
try:
concat_audio_from_m3u(out_dir, m3u_path, concat_out)
print(f"🎧 Concatenated file created at: {concat_out}")
except Exception as e:
print(f"❌ Concatenation failed: {e}", file=sys.stderr)
return 1
return 0
if __name__ == "__main__":
raise SystemExit(main())

batch_anki_import.sh Executable file

@@ -0,0 +1,141 @@
#!/bin/bash
prog="$(basename "$0")"
print_help() {
cat <<EOF
usage: $prog [-h] {es,jp}
positional arguments:
{es,jp}
options:
-h, --help show this help message and exit
EOF
}
arg_error_missing_lang() {
echo "usage: $prog [-h] {es,jp}" >&2
echo "$prog: error: the following arguments are required: lang" >&2
exit 2
}
arg_error_unknown() {
echo "usage: $prog [-h] {es,jp}" >&2
echo "$prog: error: unrecognized arguments: $*" >&2
exit 2
}
lang=""
while [[ $# -gt 0 ]]; do
case "$1" in
-h|--help)
print_help
exit 0
;;
jp|es)
if [[ -n "$lang" ]]; then
arg_error_unknown "$1"
fi
lang="$1"
shift
;;
*)
arg_error_unknown "$1"
;;
esac
done
[[ -z "$lang" ]] && arg_error_missing_lang
case "$lang" in
jp)
DECK_NAME="日本語"
LANG_CODE="ja"
TLD="com"
TEMPO="1.35"
SENTENCE_FILE="$HOME/Documents/sentences_jp.txt"
;;
es)
DECK_NAME="Español"
LANG_CODE="es"
TLD="es"
TEMPO="1.25"
SENTENCE_FILE="$HOME/Documents/sentences_es.txt"
;;
esac
TAGS='["AI-generated", "text-to-speech"]'
count=0
# Use a temporary directory to handle processing
TEMP_DIR=$(mktemp -d)
while IFS= read -r sentence || [[ -n "$sentence" ]]; do
[[ -z "$sentence" ]] && continue
# Generate unique filenames
BASENAME="tts_$(date +%Y%m%d_%H%M%S)_${lang}_$RANDOM"
# Path for the raw output from gtts
RAW_OUTPUT="$TEMP_DIR/${BASENAME}_original.mp3"
# Path for the sped-up output that goes to Anki
OUTPUT_PATH="$TEMP_DIR/${BASENAME}.mp3"
echo "🔊 Processing: $sentence"
# 1. Generate TTS with specific TLD
if gtts-cli "$sentence" --lang "$LANG_CODE" --tld "$TLD" --output "$RAW_OUTPUT"; then
# 2. Speed up audio using ffmpeg without changing pitch
if ffmpeg -loglevel error -i "$RAW_OUTPUT" -filter:a "atempo=$TEMPO" -y "$OUTPUT_PATH" < /dev/null; then
# 3. Add to Anki using the sped-up file
result=$(curl -s localhost:8765 -X POST -d "{
\"action\": \"addNote\",
\"version\": 6,
\"params\": {
\"note\": {
\"deckName\": \"$DECK_NAME\",
\"modelName\": \"Basic\",
\"fields\": {
\"Front\": \"\",
\"Back\": \"$sentence\"
},
\"options\": {
\"allowDuplicate\": false
},
\"tags\": $TAGS,
\"audio\": [{
\"path\": \"$OUTPUT_PATH\",
\"filename\": \"${BASENAME}.mp3\",
\"fields\": [\"Front\"]
}]
}
}
}")
if [[ "$result" == *'"error": null'* ]]; then
echo "✅ Added card: $sentence"
((count++))
else
echo "❌ Failed to add card: $sentence"
echo "$result"
fi
else
echo "❌ Failed to speed up audio for: $sentence"
fi
# 4. Cleanup
rm -f "$OUTPUT_PATH" "$RAW_OUTPUT"
else
echo "❌ Failed to generate TTS for: $sentence"
fi
done <"$SENTENCE_FILE"
# Cleanup temp directory
rm -rf "$TEMP_DIR"
echo "🎉 Done! Added $count cards to deck \"$DECK_NAME\"."

figures/anki_basic_card_jp.png: binary image (30 KiB), not shown.

word_scraper.py Executable file

@@ -0,0 +1,422 @@
#!/usr/bin/env python3
"""
word_scraper.py
Extract frequent words/lemmas from Anki notes via AnkiConnect.
Howto:
./word_scraper.py jp [--deck "日本語"] [--field Back] [--min-freq 2] [--outdir DIR] [--out FILE]
./word_scraper.py es [--deck "Español"] [--field Back] [--min-freq 2] [--outdir DIR] [--out FILE]
By default, this:
- chooses decks based on the lang code (jp/es) using deck_to_language mappings
- pulls notes from Anki via AnkiConnect (http://localhost:8765)
- reads a single field (default: Back)
- extracts the first visible line (HTML stripped) from that field
- tokenizes with spaCy and counts words
- writes "token count" lines sorted by descending count
Notes:
- spaCy currently may not work on Python 3.14 in your environment.
If spaCy import/load fails, create a Python 3.12 venv for this script.
"""
from __future__ import annotations
import argparse
import logging
import os
import sys
from collections import Counter
from html import unescape
from typing import Callable, Dict, Iterable, List, Optional, Tuple
import requests
import regex as re
# -------------------------
# Shared “language plumbing”
# -------------------------
# Match the idea used in audio_extractor.py: CLI lang code -> language bucket.
LANG_MAP: Dict[str, str] = {
"jp": "japanese",
"es": "spanish",
}
# Map deck name -> language bucket (same pattern as audio_extractor.py).
DECK_TO_LANGUAGE: Dict[str, str] = {
"日本語": "japanese",
"Español": "spanish",
# Add more deck mappings here
}
# Default output root (mirrors the “one folder per language” idea)
DEFAULT_OUTPUT_ROOT = os.path.expanduser("~/Documents/anki-words")
# -------------------------
# Logging
# -------------------------
def setup_logging(logfile: str) -> None:
os.makedirs(os.path.dirname(os.path.abspath(logfile)), exist_ok=True)
logging.basicConfig(
filename=logfile,
level=logging.INFO,
format="%(asctime)s [%(levelname)s] %(message)s",
)
# -------------------------
# HTML cleanup helpers
# -------------------------
def extract_first_visible_line(text: str) -> str:
"""Remove common HTML and return only the first visible line."""
text = unescape(text or "")
text = re.sub(r"</?(br|div|p)[^>]*>", "\n", text, flags=re.IGNORECASE)
text = re.sub(r"<[^>]+>", "", text)
text = text.strip()
return text.splitlines()[0] if text else ""
def extract_visible_text(text: str) -> str:
"""Remove common HTML and return all visible text as a single string."""
text = unescape(text or "")
text = re.sub(r"</?(br|div|p)[^>]*>", "\n", text, flags=re.IGNORECASE)
text = re.sub(r"<[^>]+>", "", text)
# Normalize whitespace a bit
text = re.sub(r"[ \t]+", " ", text)
text = re.sub(r"\n{2,}", "\n", text)
return text.strip()
# -------------------------
# AnkiConnect helper
# -------------------------
def anki_request(action: str, **params):
"""
Make an AnkiConnect request and return 'result'.
Raises a helpful error if the HTTP call fails or AnkiConnect returns an error.
"""
resp = requests.post(
"http://localhost:8765",
json={"action": action, "version": 6, "params": params},
timeout=30,
)
resp.raise_for_status()
data = resp.json()
if data.get("error") is not None:
raise RuntimeError(f"AnkiConnect error for {action}: {data['error']}")
return data["result"]
def get_notes(query: str) -> List[dict]:
"""
Query Anki for notes and return notesInfo payload.
"""
note_ids = anki_request("findNotes", query=query) or []
if not note_ids:
return []
return anki_request("notesInfo", notes=note_ids) or []
# -------------------------
# Language-specific token rules (spaCy-based)
# -------------------------
JAPANESE_CHAR_RE = re.compile(r"[\p{Script=Hiragana}\p{Script=Katakana}\p{Script=Han}ー]+")
JAPANESE_PARTICLES = {
"は", "が", "を", "に", "で", "と", "も", "の", "へ", "から", "まで", "より", "や", "なら",
"か", "ね", "よ", "な", "ば", "し", "ぞ", "さ", "わ", "って", "とき", "ってば", "けど", "けれど",
"しかし", "でも", "ながら", "ほど", "くらい", "もの", "こと", "ところ", "よう", "らしい", "られる",
}
JAPANESE_GRAMMAR_EXCLUDE = {
"た", "だ", "ます", "れる", "てる", "て", "ん", "しまう", "いる", "ない", "なる", "ある", "する", "です",
}
JAPANESE_ALLOWED_POS = {"NOUN", "PROPN", "VERB", "ADJ"}
def japanese_filter(token) -> bool:
"""
Filter Japanese tokens to keep “content-ish” words and avoid particles/grammar glue.
Assumes a Japanese spaCy model that provides lemma_ and pos_ reasonably.
"""
text = (token.text or "").strip()
lemma = (token.lemma_ or "").strip()
if not text:
return False
# Must look like Japanese script (hiragana/katakana/kanji/ー)
if not JAPANESE_CHAR_RE.fullmatch(text):
return False
# Drop obvious grammar / particles
if lemma in JAPANESE_GRAMMAR_EXCLUDE or text in JAPANESE_PARTICLES:
return False
# Keep only selected parts of speech
if getattr(token, "pos_", None) not in JAPANESE_ALLOWED_POS:
return False
# Drop URLs/emails/stopwords when model flags them
if getattr(token, "is_stop", False) or getattr(token, "like_url", False) or getattr(token, "like_email", False):
return False
# Defensive: drop tokens that look like HTML fragments or garbage
if any(c in text for c in "<>=/\\:&%"):
return False
if text in {"ruby", "rt", "div", "br", "nbsp", "href", "strong", "a"}:
return False
return True
def spanish_filter(token) -> bool:
"""
Keep alpha tokens that are not stopwords. (spaCy handles accent marks fine here.)
"""
return bool(getattr(token, "is_alpha", False)) and not bool(getattr(token, "is_stop", False))
def spanish_format(token) -> str:
return (token.lemma_ or token.text or "").lower().strip()
def japanese_format(token) -> str:
# Keep both lemma and surface form (useful when lemma normalization is aggressive)
lemma = (token.lemma_ or "").strip()
surface = (token.text or "").strip()
if not lemma and not surface:
return ""
if lemma and surface and lemma != surface:
return f"{lemma} ({surface})"
return lemma or surface
LANGUAGE_PROFILES = {
"spanish": {
"spacy_model": "es_core_news_sm",
"token_filter": spanish_filter,
"output_format": spanish_format,
},
"japanese": {
"spacy_model": "ja_core_news_lg",
"token_filter": japanese_filter,
"output_format": japanese_format,
},
}
def load_spacy_model(model_name: str):
"""
Import spaCy lazily and load a model.
This lets us show clearer errors when spaCy is missing/broken in the environment.
"""
try:
import spacy # type: ignore
except Exception as e:
raise RuntimeError(
"Failed to import spaCy. If you're on Python 3.14, spaCy may not be compatible yet.\n"
"Use a Python 3.12 venv for this script."
) from e
try:
return spacy.load(model_name)
except Exception as e:
raise RuntimeError(
f"Failed to load spaCy model '{model_name}'.\n"
f"Try: python -m spacy download {model_name}"
) from e
# -------------------------
# Core extraction
# -------------------------
def extract_counts(
notes: List[dict],
field_name: str,
nlp,
token_filter: Callable,
output_format: Callable,
use_full_field: bool,
) -> Counter:
"""
For each note, take the specified field, strip HTML, tokenize, and count.
"""
counter: Counter = Counter()
for note in notes:
fields = note.get("fields", {}) or {}
raw_val = (fields.get(field_name, {}) or {}).get("value", "") or ""
text = extract_visible_text(raw_val) if use_full_field else extract_first_visible_line(raw_val)
if not text:
continue
doc = nlp(text)
for token in doc:
if token_filter(token):
key = output_format(token)
if key:
counter[key] += 1
return counter
def write_counts(counter: Counter, out_path: str, min_freq: int) -> int:
"""
Write "token count" lines sorted by descending count.
Returns the number of written entries.
"""
items = [(w, c) for (w, c) in counter.items() if c >= min_freq]
items.sort(key=lambda x: (-x[1], x[0]))
os.makedirs(os.path.dirname(os.path.abspath(out_path)), exist_ok=True)
with open(out_path, "w", encoding="utf-8") as f:
for word, freq in items:
f.write(f"{word} {freq}\n")
return len(items)
def build_query_from_decks(decks: List[str]) -> str:
"""
Build an Anki query that OR's multiple deck:"..." clauses.
"""
# deck:"日本語" OR deck:"日本語::subdeck" is possible but we keep it simple.
parts = [f'deck:"{d}"' for d in decks]
return " OR ".join(parts)
# -------------------------
# Main CLI
# -------------------------
def main() -> int:
parser = argparse.ArgumentParser(
description="Extract frequent words from Anki notes (CLI resembles other toolkit scripts)."
)
# Match "positional lang” style (jp/es)
parser.add_argument("lang", choices=sorted(LANG_MAP.keys()), help="Language code (jp or es).")
# Let you override deck selection, but keep sane defaults:
# - if --query is provided, we use that exactly
# - else if --deck is provided (repeatable), we use those decks
# - else we infer decks from DECK_TO_LANGUAGE mapping
group = parser.add_mutually_exclusive_group()
group.add_argument(
"--query",
help='Full Anki search query (e.g. \'deck:"Español" tag:foo\'). Overrides --deck.',
)
group.add_argument(
"--deck",
action="append",
help='Deck name (repeatable). Example: --deck "日本語" --deck "日本語::Subdeck"',
)
# Similar “bashy” knobs
parser.add_argument("--field", default="Back", help="Which note field to read (default: Back).")
parser.add_argument("--min-freq", type=int, default=2, help="Minimum frequency to include (default: 2).")
parser.add_argument("--outdir", help="Output directory (default: ~/Documents/anki-words/<language>).")
parser.add_argument("--out", help="Output file path (default: <outdir>/words_<lang>.txt).")
parser.add_argument(
"--full-field",
action="store_true",
help="Use the full field text (HTML stripped) instead of only the first visible line.",
)
parser.add_argument(
"--spacy-model",
help="Override the spaCy model name (advanced).",
)
parser.add_argument(
"--logfile",
default=os.path.expanduser("~/Documents/anki-words/extract_words.log"),
help="Log file path.",
)
args = parser.parse_args()
setup_logging(args.logfile)
language_bucket = LANG_MAP[args.lang]
profile = LANGUAGE_PROFILES.get(language_bucket)
if not profile:
print(f"❌ Unsupported language bucket: {language_bucket}", file=sys.stderr)
return 1
# Resolve query / decks
if args.query:
query = args.query
else:
if args.deck:
decks = args.deck
else:
decks = [d for d, lang in DECK_TO_LANGUAGE.items() if lang == language_bucket]
if not decks:
print(f"❌ No decks mapped for language: {language_bucket}", file=sys.stderr)
return 1
query = build_query_from_decks(decks)
# Output paths
out_dir = os.path.expanduser(args.outdir) if args.outdir else os.path.join(DEFAULT_OUTPUT_ROOT, language_bucket)
default_outfile = os.path.join(out_dir, f"words_{args.lang}.txt")
out_path = os.path.expanduser(args.out) if args.out else default_outfile
logging.info("lang=%s bucket=%s query=%s field=%s", args.lang, language_bucket, query, args.field)
print(f"🔎 Query: {query}")
print(f"🧾 Field: {args.field}")
# Load spaCy model
model_name = args.spacy_model or profile["spacy_model"]
try:
nlp = load_spacy_model(model_name)
except Exception as e:
print(f"{e}", file=sys.stderr)
logging.exception("spaCy load failed")
return 1
# Fetch notes
try:
notes = get_notes(query)
except Exception as e:
print(f"❌ Failed to query AnkiConnect: {e}", file=sys.stderr)
logging.exception("AnkiConnect query failed")
return 1
print(f"✅ Found {len(notes)} notes.")
if not notes:
print("⚠️ No notes found. Check your query/deck names.")
return 0
# Validate the field exists on at least one note
fields0 = (notes[0].get("fields", {}) or {})
if args.field not in fields0:
available = list(fields0.keys())
print(f"❌ Field '{args.field}' not found on sample note.", file=sys.stderr)
print(f" Available fields: {available}", file=sys.stderr)
return 1
# Extract + write
counter = extract_counts(
notes=notes,
field_name=args.field,
nlp=nlp,
token_filter=profile["token_filter"],
output_format=profile["output_format"],
use_full_field=args.full_field,
)
print(f"🧠 Extracted {len(counter)} unique entries (before min-freq filter).")
written = write_counts(counter, out_path, args.min_freq)
print(f"📄 Wrote {written} entries to: {out_path}")
logging.info("wrote=%s out=%s", written, out_path)
return 0
if __name__ == "__main__":
raise SystemExit(main())

yt-transcript.py Executable file

@@ -0,0 +1,199 @@
#!/usr/bin/env python3
"""
yt-transcript.py
Extract vocab or timestamped lines from a YouTube transcript.
Howto:
./yt-transcript.py {jp,es} <video_url_or_id> [options]
Examples:
./yt-transcript.py es https://youtu.be/SLgVwNulYhc --mode vocab --top 50
./yt-transcript.py jp SLgVwNulYhc --mode sentences
Requirements:
pip install youtube-transcript-api
For Japanese tokenization (recommended):
pip install "fugashi[unidic-lite]"
"""
from __future__ import annotations
import re
import sys
import argparse
from collections import Counter
from urllib.parse import urlparse, parse_qs
from youtube_transcript_api import YouTubeTranscriptApi
# -------------------------
# Language mapping
# -------------------------
LANG_MAP = {
"jp": "ja",
"es": "es",
}
# Small starter stopword lists (you can grow these over time)
STOPWORDS = {
"es": {
"de", "la", "que", "el", "en", "y", "a", "los", "del", "se", "las", "por",
"un", "para", "con", "no", "una", "su", "al", "lo", "como",
},
"en": {"the", "is", "and", "of", "to", "in", "it", "that", "on", "you", "this", "for", "with"},
"ja": {"", "", "", "", "", "", "", "", "です", "ます", "する", "ある", "いる"},
}
# -------------------------
# URL / transcript helpers
# -------------------------
def extract_video_id(url_or_id: str) -> str:
"""Accept full YouTube URLs (including youtu.be) or raw video IDs."""
if "youtube" in url_or_id or "youtu.be" in url_or_id:
query = urlparse(url_or_id)
# youtu.be/<id>
if query.hostname == "youtu.be":
return query.path.lstrip("/")
# youtube.com/watch?v=<id>
if query.hostname in ("www.youtube.com", "youtube.com", "m.youtube.com"):
qs = parse_qs(query.query)
v = qs.get("v", [])
if v:
return v[0]
return url_or_id
def fetch_transcript(video_id: str, lang_code: str):
"""
Support both youtube-transcript-api v1.x and older v0.x.
- v1.x: instance method .fetch(video_id, languages=[...]) -> list of snippet objects
- v0.x: class method .get_transcript(video_id, languages=[...]) -> list of dicts
"""
# Newer API (v1.x)
if hasattr(YouTubeTranscriptApi, "fetch"):
api = YouTubeTranscriptApi()
return api.fetch(video_id, languages=[lang_code])
# Older API (v0.x)
if hasattr(YouTubeTranscriptApi, "get_transcript"):
return YouTubeTranscriptApi.get_transcript(video_id, languages=[lang_code])
raise RuntimeError("Unsupported youtube-transcript-api version (missing fetch/get_transcript).")
def snippet_text(entry) -> str:
"""Entry can be a dict (old API) or a snippet object (new API)."""
if isinstance(entry, dict):
return (entry.get("text", "") or "")
return (getattr(entry, "text", "") or "")
def snippet_start(entry) -> float:
"""Entry can be a dict (old API) or a snippet object (new API)."""
if isinstance(entry, dict):
return float(entry.get("start", 0.0) or 0.0)
return float(getattr(entry, "start", 0.0) or 0.0)
# -------------------------
# Tokenization
# -------------------------
def tokenize_japanese(text: str) -> list[str]:
"""
Japanese tokenization using fugashi (MeCab wrapper).
Recommended install: pip install "fugashi[unidic-lite]"
"""
try:
from fugashi import Tagger
except ImportError as e:
raise RuntimeError('Japanese requires fugashi. Install: pip install "fugashi[unidic-lite]"') from e
tagger = Tagger()
return [w.surface for w in tagger(text)]
def tokenize_spanish(text: str, raw: bool = False) -> list[str]:
"""
Lightweight Spanish tokenization (keeps accented letters).
If raw=False, lowercases everything.
"""
tokens = re.findall(r"\b[\wáéíóúñü]+\b", text)
return tokens if raw else [t.lower() for t in tokens]
def count_words(tokens: list[str], lang_code: str, remove_stopwords: bool = True) -> Counter:
if remove_stopwords:
sw = STOPWORDS.get(lang_code, set())
tokens = [t for t in tokens if t not in sw]
return Counter(tokens)
# -------------------------
# Main
# -------------------------
def main() -> int:
parser = argparse.ArgumentParser(
description="Extract vocab or timestamped lines from a YouTube transcript."
)
parser.add_argument("lang", choices=["jp", "es"], help="Language code (jp or es).")
parser.add_argument("video", help="YouTube video URL or ID")
parser.add_argument(
"--mode",
choices=["vocab", "sentences"],
default="vocab",
help="Mode: vocab (word counts) or sentences (timestamped lines)",
)
parser.add_argument("--top", type=int, default=None, help="Top N words (vocab mode only)")
parser.add_argument("--no-stopwords", action="store_true", help="Don't remove common words")
parser.add_argument(
"--raw",
action="store_true",
help="(Spanish only) Do not lowercase tokens",
)
args = parser.parse_args()
lang_code = LANG_MAP[args.lang]
video_id = extract_video_id(args.video)
try:
transcript = fetch_transcript(video_id, lang_code)
except Exception as e:
print(f"Error fetching transcript: {e}", file=sys.stderr)
return 1
if args.mode == "sentences":
for entry in transcript:
start = snippet_start(entry)
text = snippet_text(entry).replace("\n", " ").strip()
if text:
print(f"[{start:.2f}s] {text}")
return 0
# vocab mode
text = " ".join(snippet_text(entry) for entry in transcript).replace("\n", " ")
if lang_code == "ja":
tokens = tokenize_japanese(text)
else:
tokens = tokenize_spanish(text, raw=args.raw)
counts = count_words(tokens, lang_code, remove_stopwords=not args.no_stopwords)
items = counts.most_common(args.top) if args.top else counts.most_common()
for word, count in items:
print(f"{word}: {count}")
return 0
if __name__ == "__main__":
raise SystemExit(main())