Revamp: embedded console, faster-whisper, simplified install
.gitignore (vendored, 1 line changed)
@@ -7,6 +7,7 @@ __pycache__/
 venv/
 env/
 ENV/
+.venv/
 
 # IDE
 .vscode/
Mac_instructions.md
@@ -1,9 +1,31 @@
-### How to run on Mac
-
-Unfortunately, I have not found a permament solution for this, not being a Mac user has limited the ways I can test this.
-
-#### Recommended steps
-1. Open a terminal and navigate to the root folder (the downloaded the folder).
-1. You can also right-click (or equivalent) on the root folder to open a Terminal within the folder.
-2. Run the following command:
+### How to run on Mac / Linux
+
+#### Quick start
+1. Open Terminal and navigate to the project folder (or right-click the folder and select "Open in Terminal").
+2. Make the script executable (only needed once):
 ```
-python app.py
+chmod +x run_Mac.sh
 ```
+3. Run it:
+```
+./run_Mac.sh
+```
+
+This will automatically:
+- Create a virtual environment (`.venv`)
+- Install all dependencies (no admin rights needed)
+- Launch the app
+
+#### Manual steps (alternative)
+
+If you prefer to do it manually:
+```
+python3 -m venv .venv
+.venv/bin/python install.py
+.venv/bin/python app.py
+```
+
+#### Notes
+- **Python 3.10+** is required. macOS users can install it from [python.org](https://www.python.org/downloads/) or via `brew install python`.
+- **No FFmpeg install needed** — audio decoding is bundled.
+- **GPU acceleration** is not available on macOS (Apple Silicon MPS is not supported by CTranslate2). CPU with int8 quantization is still fast.
+- On Apple Silicon (M1/M2/M3/M4), the `small` or `base` models run well. `medium` works but is slower.
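The Notes above require Python 3.10 or later. A small preflight check can confirm this before running the installer; the snippet below is an illustrative sketch, not part of the repository, and the function name `check_python` is ours.

```python
import sys

def check_python(min_version=(3, 10)):
    """True if the running interpreter meets the app's minimum version."""
    return sys.version_info >= min_version

if check_python():
    print("Python version OK")
else:
    print("Python 3.10+ required; found", sys.version.split()[0])
```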
README.md (92 lines changed)
@@ -1,18 +1,24 @@
 ## Local Transcribe with Whisper
-Local Transcribe with Whisper is a user-friendly desktop application that allows you to transcribe audio and video files using the Whisper ASR system. This application provides a graphical user interface (GUI) built with Python and the Tkinter library, making it easy to use even for those not familiar with programming.
-
-## New in version 1.2!
-1. Simpler usage:
-   1. File type: You no longer need to specify file type. The program will only transcribe elligible files.
-   2. Language: Added option to specify language, which might help in some cases. Clear the default text to run automatic language recognition.
-   3. Model selection: Now a dropdown option that includes most models for typical use.
-2. New and improved GUI.
-
-
+> **⚠ Note for Mac users (Apple Silicon):** This version uses `faster-whisper` (CTranslate2), which does **not** support Apple M-chip GPU acceleration. Transcription will run on CPU, which is slower than OpenAI's Whisper with Metal/CoreML support. The trade-off is a much simpler installation — no conda, no PyTorch, no admin rights. If you'd prefer M-chip GPU acceleration and don't mind a more involved setup, switch to the **classic** release:
+> ```
+> git checkout classic
+> ```
+
+Local Transcribe with Whisper is a user-friendly desktop application that allows you to transcribe audio and video files using the Whisper ASR system, powered by [faster-whisper](https://github.com/SYSTRAN/faster-whisper) (CTranslate2). This application provides a graphical user interface (GUI) built with Python and the Tkinter library, making it easy to use even for those not familiar with programming.
+
+## New in version 2.0!
+1. **Switched to faster-whisper** — up to 4× faster transcription with lower memory usage.
+2. **No separate FFmpeg installation needed** — audio decoding is handled by the bundled PyAV library.
+3. **No admin rights required** — a plain `pip install` covers everything.
+4. **No PyTorch dependency** — dramatically smaller install footprint.
+5. **`tiny` model added** — smallest and fastest option for quick drafts.
+
 ## Features
 * Select the folder containing the audio or video files you want to transcribe. Tested with m4a video.
 * Choose the language of the files you are transcribing. You can either select a specific language or let the application automatically detect the language.
-* Select the Whisper model to use for the transcription. Available models include "base.en", "base", "small.en", "small", "medium.en", "medium", and "large". Models with .en ending are better if you're transcribing English, especially the base and small models.
+* Select the Whisper model to use for the transcription. Available models include "tiny", "tiny.en", "base", "base.en", "small", "small.en", "medium", "medium.en", "large-v2", and "large-v3". Models with .en ending are better if you're transcribing English, especially the base and small models.
+* **Swedish-optimised models** — [KB-Whisper](https://huggingface.co/collections/KBLab/kb-whisper) from the National Library of Sweden (KBLab) is available in all sizes (tiny → large). These models reduce Word Error Rate by up to 47 % compared to OpenAI Whisper on Swedish speech. The language is set to Swedish automatically when a KB model is selected.
 * Enable the verbose mode to receive detailed information during the transcription process.
 * Monitor the progress of the transcription with the progress bar and terminal.
 * Confirmation dialog before starting the transcription to ensure you have selected the correct folder.
@@ -27,66 +33,58 @@ Or by cloning the repository with:
 git clone https://github.com/soderstromkr/transcribe.git
 ```
 ### Python Version **(any platform including Mac users)**
-1. This script was made and tested in an Anaconda environment with Python 3.10. I recommend miniconda for a smaller installation, and if you're not familiar with Python.
-See [here](https://docs.anaconda.com/free/miniconda/miniconda-install/) for instructions. You will **need administrator rights**.
-2. Whisper also requires some additional libraries. The [setup](https://github.com/openai/whisper#setup) page states: "The codebase also depends on a few Python packages, most notably HuggingFace Transformers for their fast tokenizer implementation and ffmpeg-python for reading audio files."
-Users might not need to specifically install Transfomers. However, a conda installation might be needed for ffmpeg[^1], which takes care of setting up PATH variables.
-From the Anaconda Prompt (which should now be installed in your system, find it with the search function), type or copy the following:
+1. Install Python 3.10 or later. You can download it from [python.org](https://www.python.org/downloads/). During installation, **check "Add Python to PATH"**. No administrator rights are needed if you install for your user only.
+2. Run the installer. Open a terminal (Command Prompt on Windows, Terminal on Mac/Linux) in the project folder and run:
 ```
-conda install -c conda-forge ffmpeg-python
+python install.py
 ```
-You can also choose not to use Anaconda (or miniconda), and use Python. In that case, you need to [download and install FFMPEG](https://ffmpeg.org/download.html) (and potentially add it to your PATH). See here for [WikiHow instructions](https://www.wikihow.com/Install-FFmpeg-on-Windows)
-3. The main functionality comes from openai-whisper. See their [page](https://github.com/openai/whisper) for details. It also uses some additional packages (colorama, and customtkinter), install them with the following command:
-```
-pip install -r requirements.txt
-```
-4. Run the app:
-   1. For **Windows**: In the same folder as the *app.py* file, run the app from Anaconda prompt by running
-   ```python app.py```
-   or with the batch file called run_Windows.bat (for Windows users), which assumes you have conda installed and in the base environment (This is for simplicity, but users are usually adviced to create an environment, see [here](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-with-commands) for more info) just make sure you have the correct environment (right click on the file and press edit to make any changes).
-   3. For **Mac**: Haven't figured out a better way to do this, see [the instructions here](Mac_instructions.md)
-
-**Note** If you want to download a model first, and then go offline for transcription, I recommend running the model with the default sample folder, which will download the model locally.
+This will:
+- Install all required packages (including bundled FFmpeg — no separate install needed)
+- **Auto-detect your NVIDIA GPU** and ask if you want GPU acceleration
+- No conda, no admin rights required
+
+Alternatively, you can install manually with `pip install -r requirements.txt`.
+3. Run the app:
+   1. For **Windows**: double-click `run_Windows.bat` (it will auto-install on first run) or run:
+   ```
+   python app.py
+   ```
+   2. For **Mac / Linux**: run `./run_Mac.sh` (auto-installs on first run). See [Mac instructions](Mac_instructions.md) for details.
+
+**Note** The first run with a given model will download it (~75 MB for base, ~500 MB for medium). After that, everything works offline.
 
 ## GPU Support
-This program **does support running on NVIDIA GPUs**, which can significantly speed up transcription times. To use GPU acceleration, you need to have the correct version of PyTorch installed with CUDA support.
+This program **does support running on NVIDIA GPUs**, which can significantly speed up transcription times. faster-whisper uses CTranslate2, which requires NVIDIA CUDA libraries for GPU acceleration.
 
-### Installing PyTorch with CUDA Support
-If you have an NVIDIA GPU and want to take advantage of GPU acceleration, you can install a CUDA-enabled version of PyTorch using:
+### Automatic Detection
+The `install.py` script **automatically detects NVIDIA GPUs** and will ask if you want to install GPU support. If you skipped it during installation, you can add it anytime:
 ```
-pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
+pip install nvidia-cublas-cu12 nvidia-cudnn-cu12
 ```
 
-**Note:** The command above installs PyTorch with CUDA 12.1 support. Make sure your NVIDIA GPU drivers are compatible with CUDA 12.1. You can check your CUDA version by running `nvidia-smi` in your terminal.
-
-If you need a different CUDA version, visit the [PyTorch installation page](https://pytorch.org/get-started/locally/) to generate the appropriate installation command for your system.
+**Note:** Make sure your NVIDIA GPU drivers are up to date. You can check by running `nvidia-smi` in your terminal. The program will automatically detect and use your GPU if available, otherwise it falls back to CPU.
 
 ### Verifying GPU Support
-After installation, you can verify that PyTorch can detect your GPU by running:
+After installation, you can verify that your GPU is available by running:
 ```python
-import torch
-print(torch.cuda.is_available()) # Should print True if GPU is available
-print(torch.cuda.get_device_name(0)) # Should print your GPU name
+import ctranslate2
+print(ctranslate2.get_supported_compute_types("cuda"))
 ```
-If GPU is not detected, the program will automatically fall back to CPU processing, though this will be slower.
+If this returns a list containing `"float16"`, GPU acceleration is working.
 
 ## Usage
-1. When launched, the app will also open a terminal that shows some additional information.
+1. Launch the app — the built-in console panel at the bottom shows a welcome message and all progress updates.
 2. Select the folder containing the audio or video files you want to transcribe by clicking the "Browse" button next to the "Folder" label. This will open a file dialog where you can navigate to the desired folder. Remember, you won't be choosing individual files but whole folders!
 3. Enter the desired language for the transcription in the "Language" field. You can either select a language or leave it blank to enable automatic language detection.
 4. Choose the Whisper model to use for the transcription from the dropdown list next to the "Model" label.
-5. Enable the verbose mode by checking the "Verbose" checkbox if you want to receive detailed information during the transcription process.
-6. Click the "Transcribe" button to start the transcription. The button will be disabled during the process to prevent multiple transcriptions at once.
-7. Monitor the progress of the transcription with the progress bar.
-8. Once the transcription is completed, a message box will appear displaying the transcribed text. Click "OK" to close the message box.
-9. You can run the application again or quit the application at any time by clicking the "Quit" button.
+5. Click the "Transcribe" button to start the transcription. The button will be disabled during the process to prevent multiple transcriptions at once.
+6. Monitor progress in the embedded console panel — it shows model loading, per-file progress, and segment timestamps in real time.
+7. Once the transcription is completed, a message box will appear displaying the result. Click "OK" to close it.
+8. You can run the application again or quit at any time by clicking the "Quit" button.
 
 ## Jupyter Notebook
 Don't want fancy EXEs or GUIs? Use the function as is. See [example](example.ipynb) for an implementation on Jupyter Notebook.
 
-[^1]: Advanced users can use ```pip install ffmpeg-python``` but be ready to deal with some [PATH issues](https://stackoverflow.com/questions/65836756/python-ffmpeg-wont-accept-path-why), which I encountered in Windows 11.
-
 [](https://zenodo.org/badge/latestdoi/617404576)
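The README's verification snippet can be folded into a small helper that picks the best device for faster-whisper. This is a sketch under the assumption that `ctranslate2` may be absent or CUDA-less at runtime; the function name `pick_device` is ours, not the project's.

```python
def pick_device():
    """Prefer CUDA with float16 when CTranslate2 reports support, else CPU int8."""
    try:
        import ctranslate2
        if "float16" in ctranslate2.get_supported_compute_types("cuda"):
            return "cuda", "float16"
    except Exception:
        pass  # ctranslate2 missing, or no usable CUDA device
    return "cpu", "int8"

device, compute_type = pick_device()
print(f"Using {device} with {compute_type}")
```

The returned pair maps directly onto faster-whisper's `WhisperModel(..., device=..., compute_type=...)` arguments.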
app.py (102 lines changed)
@@ -1,3 +1,5 @@
+import os
+import sys
 import tkinter as tk
 from tkinter import ttk
 from tkinter import filedialog
@@ -5,10 +7,44 @@ from tkinter import messagebox
 from src._LocalTranscribe import transcribe, get_path
 import customtkinter
 import threading
-from colorama import Back
-import colorama
-colorama.init(autoreset=True)
-import os
+# ── Helper: redirect stdout/stderr into a CTkTextbox ──────────────────────
+import re
+
+_ANSI_RE = re.compile(r'\x1b\[[0-9;]*m')  # strip colour codes
+
+
+class _ConsoleRedirector:
+    """Redirects output exclusively to the in-app console panel."""
+    def __init__(self, text_widget):
+        self.widget = text_widget
+
+    def write(self, text):
+        clean = _ANSI_RE.sub('', text)  # strip ANSI colours
+        if clean.strip() == '':
+            return
+        # Schedule UI update on the main thread
+        try:
+            self.widget.after(0, self._append, clean)
+        except Exception:
+            pass
+
+    def _append(self, text):
+        self.widget.configure(state='normal')
+        self.widget.insert('end', text + ('\n' if not text.endswith('\n') else ''))
+        self.widget.see('end')
+        self.widget.configure(state='disabled')
+
+    def flush(self):
+        pass
+
+
+# HuggingFace model IDs for non-standard models
+HF_MODEL_MAP = {
+    'KB Swedish (tiny)': 'KBLab/kb-whisper-tiny',
+    'KB Swedish (base)': 'KBLab/kb-whisper-base',
+    'KB Swedish (small)': 'KBLab/kb-whisper-small',
+    'KB Swedish (medium)': 'KBLab/kb-whisper-medium',
+    'KB Swedish (large)': 'KBLab/kb-whisper-large',
+}
@@ -18,7 +54,6 @@ firstclick = True
 
 class App:
     def __init__(self, master):
-        print(Back.CYAN + "Welcome to Local Transcribe with Whisper!\U0001f600\nCheck back here to see some output from your transcriptions.\nDon't worry, they will also be saved on the computer!\U0001f64f")
         self.master = master
         # Change font
         font = ('Roboto', 13, 'bold') # Change the font and size here
@@ -28,6 +63,7 @@ class App:
         path_frame.pack(fill=tk.BOTH, padx=10, pady=10)
         customtkinter.CTkLabel(path_frame, text="Folder:", font=font).pack(side=tk.LEFT, padx=5)
         self.path_entry = customtkinter.CTkEntry(path_frame, width=50, font=font_b)
+        self.path_entry.insert(0, os.path.join(os.getcwd(), 'sample_audio'))
         self.path_entry.pack(side=tk.LEFT, fill=tk.X, expand=True)
         customtkinter.CTkButton(path_frame, text="Browse", command=self.browse, font=font).pack(side=tk.LEFT, padx=5)
         # Language frame
@@ -47,8 +83,13 @@ class App:
         self.language_entry.bind('<FocusIn>', on_entry_click)
         self.language_entry.pack(side=tk.LEFT, fill=tk.X, expand=True)
         # Model frame
-        models = ['base.en', 'base', 'small.en',
-                  'small', 'medium.en', 'medium', 'large']
+        models = ['tiny', 'tiny.en', 'base', 'base.en',
+                  'small', 'small.en', 'medium', 'medium.en',
+                  'large-v2', 'large-v3',
+                  '───────────────',
+                  'KB Swedish (tiny)', 'KB Swedish (base)',
+                  'KB Swedish (small)', 'KB Swedish (medium)',
+                  'KB Swedish (large)']
         model_frame = customtkinter.CTkFrame(master)
         model_frame.pack(fill=tk.BOTH, padx=10, pady=10)
         customtkinter.CTkLabel(model_frame, text="Model:", font=font).pack(side=tk.LEFT, padx=5)
@@ -56,13 +97,8 @@ class App:
         self.model_combobox = customtkinter.CTkComboBox(
             model_frame, width=50, state="readonly",
             values=models, font=font_b)
-        self.model_combobox.set(models[1]) # Set the default value
+        self.model_combobox.set('medium') # Set the default value
         self.model_combobox.pack(side=tk.LEFT, fill=tk.X, expand=True)
-        # Verbose frame
-        verbose_frame = customtkinter.CTkFrame(master)
-        verbose_frame.pack(fill=tk.BOTH, padx=10, pady=10)
-        self.verbose_var = tk.BooleanVar()
-        customtkinter.CTkCheckBox(verbose_frame, text="Output transcription to terminal", variable=self.verbose_var, font=font).pack(side=tk.LEFT, padx=5)
         # Progress Bar
         self.progress_bar = ttk.Progressbar(master, length=200, mode='indeterminate')
         # Button actions frame
@@ -71,6 +107,23 @@ class App:
         self.transcribe_button = customtkinter.CTkButton(button_frame, text="Transcribe", command=self.start_transcription, font=font)
         self.transcribe_button.pack(side=tk.LEFT, padx=5, pady=10, fill=tk.X, expand=True)
         customtkinter.CTkButton(button_frame, text="Quit", command=master.quit, font=font).pack(side=tk.RIGHT, padx=5, pady=10, fill=tk.X, expand=True)
+
+        # ── Embedded console / log panel ──────────────────────────────────
+        log_label = customtkinter.CTkLabel(master, text="Console output", font=font, anchor='w')
+        log_label.pack(fill=tk.X, padx=12, pady=(8, 0))
+        self.log_box = customtkinter.CTkTextbox(master, height=220, font=('Consolas', 14),
+                                                wrap='word', state='disabled',
+                                                fg_color='#1e1e1e', text_color='#e0e0e0')
+        self.log_box.pack(fill=tk.BOTH, expand=True, padx=10, pady=(2, 10))
+
+        # Redirect stdout & stderr into the log panel (no backend console)
+        sys.stdout = _ConsoleRedirector(self.log_box)
+        sys.stderr = _ConsoleRedirector(self.log_box)
+
+        # Welcome message (shown after redirect so it appears in the panel)
+        print("Welcome to Local Transcribe with Whisper! \U0001f600")
+        print("Transcriptions will be saved automatically.")
+        print("─" * 46)
     # Helper functions
     # Browsing
     def browse(self):
@@ -87,12 +140,22 @@ class App:
     # Threading
     def transcribe_thread(self):
         path = self.path_entry.get()
-        model = self.model_combobox.get()
+        model_display = self.model_combobox.get()
+        # Ignore the visual separator
+        if model_display.startswith('─'):
+            messagebox.showinfo("Invalid selection", "Please select a model, not the separator line.")
+            self.transcribe_button.configure(state=tk.NORMAL)
+            return
+        model = HF_MODEL_MAP.get(model_display, model_display)
         language = self.language_entry.get()
+        # Auto-set Swedish for KB models
+        is_kb_model = model_display.startswith('KB Swedish')
         # Check if the language field has the default text or is empty
-        if language == self.default_language_text or not language.strip():
+        if is_kb_model:
+            language = 'sv'
+        elif language == self.default_language_text or not language.strip():
             language = None # This is the same as passing nothing
-        verbose = self.verbose_var.get()
+        verbose = True # always show transcription progress in the console panel
         # Show progress bar
         self.progress_bar.pack(fill=tk.X, padx=5, pady=5)
         self.progress_bar.start()
@@ -122,9 +185,10 @@ if __name__ == "__main__":
     # Setting custom themes
     root = customtkinter.CTk()
     root.title("Local Transcribe with Whisper")
-    # Geometry
-    width,height = 450,275
-    root.geometry('{}x{}'.format(width,height))
+    # Geometry — taller to accommodate the embedded console panel
+    width, height = 550, 560
+    root.geometry('{}x{}'.format(width, height))
+    root.minsize(450, 480)
     # Icon
     root.iconbitmap('images/icon.ico')
     # Run
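The `_ConsoleRedirector` above hinges on one detail worth isolating: the regex that strips ANSI colour codes before text reaches the Tk widget. A standalone sketch of that behaviour:

```python
import re

# Same pattern app.py compiles: matches SGR colour sequences like "\x1b[36m".
_ANSI_RE = re.compile(r'\x1b\[[0-9;]*m')

def strip_ansi(text: str) -> str:
    """Remove ANSI colour codes so they do not show up as garbage in the GUI log."""
    return _ANSI_RE.sub('', text)

print(strip_ansi('\x1b[36mWelcome!\x1b[0m'))  # prints: Welcome!
```

Stripping in `write()` rather than at each `print` call site means any library that emits coloured output is handled uniformly.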
@@ -1,7 +1,7 @@
 from cx_Freeze import setup, Executable
 
 build_exe_options = {
-    "packages": ['whisper','tkinter','customtkinter']
+    "packages": ['faster_whisper','tkinter','customtkinter']
 }
 executables = (
     [
@@ -13,7 +13,7 @@ executables = (
 )
 setup(
     name="Local Transcribe with Whisper",
-    version="1.2",
+    version="2.0",
     author="Kristofer Rolf Söderström",
     options={"build_exe":build_exe_options},
     executables=executables
install.py (new file, 128 lines)
@@ -0,0 +1,128 @@
|
|||||||
|
"""
|
||||||
|
Installer script for Local Transcribe with Whisper.
|
||||||
|
Detects NVIDIA GPU and offers to install GPU acceleration support.
|
||||||
|
|
||||||
|
Usage:
|
||||||
|
python install.py
|
||||||
|
"""
|
||||||
|
|
||||||
|
import os
|
||||||
|
import subprocess
|
||||||
|
import sys
|
||||||
|
import shutil
|
||||||
|
import site
|
||||||
|
|
||||||
|
|
||||||
|
def detect_nvidia_gpu():
|
||||||
|
"""Check if an NVIDIA GPU is present."""
|
||||||
|
candidates = [
|
||||||
|
shutil.which("nvidia-smi"),
|
||||||
|
r"C:\Windows\System32\nvidia-smi.exe",
|
||||||
|
r"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe",
|
||||||
|
]
|
||||||
|
for path in candidates:
|
||||||
|
if not path or not os.path.isfile(path):
|
||||||
|
continue
|
||||||
|
try:
|
||||||
|
r = subprocess.run(
|
||||||
|
[path, "--query-gpu=name", "--format=csv,noheader"],
|
||||||
|
capture_output=True, text=True, timeout=10,
|
||||||
|
)
|
||||||
|
if r.returncode == 0 and r.stdout.strip():
|
||||||
|
return True, r.stdout.strip().split("\n")[0]
|
||||||
|
except Exception:
|
||||||
|
continue
|
||||||
|
return False, None
|
||||||
|
|
||||||
|
|
||||||
|
def pip_install(*packages):
|
||||||
|
cmd = [sys.executable, "-m", "pip", "install"] + list(packages)
|
||||||
|
print(f"\n> {' '.join(cmd)}\n")
|
||||||
|
subprocess.check_call(cmd)
|
||||||
|
|
||||||
|
|
||||||
|
def get_site_packages():
|
||||||
|
for p in site.getsitepackages():
|
||||||
|
if p.endswith("site-packages"):
|
||||||
|
return p
|
||||||
|
return site.getsitepackages()[0]
|
||||||
|
|
||||||
|
|
||||||
|
def create_nvidia_pth():
|
||||||
|
"""Create a .pth startup hook that registers NVIDIA DLL directories."""
|
||||||
|
sp = get_site_packages()
|
||||||
|
pth_path = os.path.join(sp, "nvidia_cuda_path.pth")
|
||||||
|
# This one-liner runs at Python startup, before any user code.
|
||||||
|
pth_content = (
|
||||||
|
"import os, glob as g; "
|
||||||
|
"any(os.add_dll_directory(d) or os.environ.__setitem__('PATH', d + os.pathsep + os.environ.get('PATH','')) "
|
||||||
|
"for d in g.glob(os.path.join(r'" + sp.replace("'", "\\'") + "', 'nvidia', '*', 'bin')) "
|
||||||
|
"+ g.glob(os.path.join(r'" + sp.replace("'", "\\'") + "', 'nvidia', '*', 'lib')) "
|
||||||
|
"if os.path.isdir(d)) if os.name == 'nt' else None\n"
|
||||||
|
)
|
||||||
|
with open(pth_path, "w") as f:
|
||||||
|
        f.write(pth_content)
    print(f" Created CUDA startup hook: {pth_path}")


def verify_cuda():
    """Verify CUDA works in a fresh subprocess."""
    try:
        r = subprocess.run(
            [sys.executable, "-c",
             "import ctranslate2; "
             "print('float16' in ctranslate2.get_supported_compute_types('cuda'))"],
            capture_output=True, text=True, timeout=30,
        )
        return r.stdout.strip() == "True"
    except Exception:
        return False


def main():
    print("=" * 55)
    print(" Local Transcribe with Whisper — Installer")
    print("=" * 55)

    # Step 1: Base packages
    print("\n[1/2] Installing base requirements...")
    pip_install("-r", "requirements.txt")
    print("\n Base requirements installed!")

    # Step 2: GPU
    print("\n[2/2] Checking for NVIDIA GPU...")
    has_gpu, gpu_name = detect_nvidia_gpu()

    if has_gpu:
        print(f"\n NVIDIA GPU detected: {gpu_name}")
        print(" GPU acceleration can make transcription 2-5x faster.")
        print(" This will install ~300 MB of additional CUDA libraries.\n")

        while True:
            answer = input(" Install GPU support? [Y/n]: ").strip().lower()
            if answer in ("", "y", "yes"):
                print("\n Installing CUDA libraries...")
                pip_install("nvidia-cublas-cu12", "nvidia-cudnn-cu12")
                create_nvidia_pth()
                print("\n Verifying CUDA...")
                if verify_cuda():
                    print(" GPU support verified and working!")
                else:
                    print(" WARNING: CUDA installed but not detected.")
                    print(" Update your NVIDIA drivers and try again.")
                break
            elif answer in ("n", "no"):
                print("\n Skipping GPU. Re-run install.py to add it later.")
                break
            else:
                print(" Please enter Y or N.")
    else:
        print("\n No NVIDIA GPU detected — using CPU mode.")

    print("\n" + "=" * 55)
    print(" Done! Run the app with: python app.py")
    print("=" * 55)


if __name__ == "__main__":
    main()
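The installer's `verify_cuda` pattern — probing a capability in a brand-new interpreter so that freshly installed packages and DLL search paths are actually picked up — generalizes to any post-install check. A minimal stdlib sketch (the `check_in_subprocess` helper name is ours, not part of the repo):

```python
import subprocess
import sys

def check_in_subprocess(expr: str, timeout: int = 30) -> bool:
    """Evaluate a Python expression in a fresh interpreter and report
    whether it printed 'True'. A fresh process matters right after an
    install: the current process may hold stale import/DLL search state."""
    try:
        r = subprocess.run(
            [sys.executable, "-c", f"print(bool({expr}))"],
            capture_output=True, text=True, timeout=timeout,
        )
        return r.stdout.strip() == "True"
    except Exception:
        return False

# The installer effectively asks:
#   check_in_subprocess("'float16' in ctranslate2.get_supported_compute_types('cuda')")
print(check_in_subprocess("1 + 1 == 2"))     # a trivially true probe
print(check_in_subprocess("undefined_name"))  # child raises NameError -> False
```

Any exception in the child (or a timeout) simply yields `False`, which is why the installer can fall back to a driver-update hint instead of crashing.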
@@ -1,3 +1,2 @@
-openai-whisper
+faster-whisper
 customtkinter
-colorama
29
run_Mac.sh
Normal file
@@ -0,0 +1,29 @@
+#!/bin/bash
+# ============================================================
+# Local Transcribe with Whisper — macOS / Linux launcher
+# ============================================================
+# Double-click this file or run: ./run_Mac.sh
+# On first run it creates a venv and installs dependencies.
+# ============================================================
+
+set -e
+
+cd "$(dirname "$0")"
+
+# Create .venv if it doesn't exist
+if [ ! -f ".venv/bin/python" ]; then
+    echo "Creating virtual environment..."
+    python3 -m venv .venv
+fi
+
+PYTHON=".venv/bin/python"
+
+# Install dependencies on first run
+if ! "$PYTHON" -c "import faster_whisper" 2>/dev/null; then
+    echo "First run detected — running installer..."
+    "$PYTHON" install.py
+    echo
+fi
+
+echo "Starting Local Transcribe..."
+"$PYTHON" app.py
@@ -1,5 +1,23 @@
 @echo off
-echo Starting...
-call conda activate base
-REM OPTION 2 : (KEEP TEXT WITHIN QUOTES AND CHANGE USERNAME) "C:/Users/user/Anaconda3/condabin/activate.bat"
-call python app.py
+REM Create .venv on first run if it doesn't exist
+if not exist ".venv\Scripts\python.exe" (
+    echo Creating virtual environment...
+    python -m venv .venv
+    if errorlevel 1 (
+        echo ERROR: Failed to create virtual environment. Is Python installed and on PATH?
+        pause
+        exit /b 1
+    )
+)
+
+set PYTHON=.venv\Scripts\python.exe
+
+REM Check if dependencies are installed
+%PYTHON% -c "import faster_whisper" 2>nul
+if errorlevel 1 (
+    echo First run detected - running installer...
+    %PYTHON% install.py
+    echo.
+)
+echo Starting Local Transcribe...
+%PYTHON% app.py
@@ -1,2 +1,2 @@
 Armstrong_Small_Step
-[0:00:00 --> 0:00:29.360000]: alumnfeldaguyrjarna om det nya skirprå kızım om det där föddarna hatt splittar, do nackrott,
+[0:00:00 --> 0:00:07]: That's one small step for man, one giant leap for mankind.
@@ -1,4 +1,2 @@
 Axel_Pettersson_röstinspelning
-[0:00:00 --> 0:00:06]: Hej, jag heter Raxel Patterson, jag får att se över UR 1976.
-[0:00:06 --> 0:00:12.540000]: Jag har varit Wikipedia-périonsen 2018 och jag har översat röst-intro-
-[0:00:12.540000 --> 0:00:15.540000]:-projektet till svenska.
+[0:00:00 --> 0:00:15]: Hej, jag heter Axel Pettersson, jag föddes i Örebro 1976. Jag har varit Wikipedia sen 2008 och jag har översatt röstintroduktionsprojektet till svenska.
@@ -1,11 +1,56 @@
 import os
+import sys
 import datetime
+import site
 from glob import glob
-import whisper
-from torch import backends, cuda, Generator
-import colorama
-from colorama import Back,Fore
-colorama.init(autoreset=True)
+
+# ---------------------------------------------------------------------------
+# CUDA setup — must happen before importing faster_whisper / ctranslate2
+# ---------------------------------------------------------------------------
+def _setup_cuda_dlls():
+    """Add NVIDIA pip-package DLL dirs to the DLL search path (Windows only).
+
+    pip-installed nvidia-cublas-cu12 / nvidia-cudnn-cu12 place their .dll
+    files inside the site-packages tree. Python 3.8+ on Windows does NOT
+    search PATH for DLLs loaded via ctypes/LoadLibrary, so we must
+    explicitly register every nvidia/*/bin and nvidia/*/lib directory using
+    os.add_dll_directory *and* prepend them to PATH (some native extensions
+    still rely on PATH).
+    """
+    if sys.platform != "win32":
+        return
+    try:
+        for sp in site.getsitepackages():
+            nvidia_root = os.path.join(sp, "nvidia")
+            if not os.path.isdir(nvidia_root):
+                continue
+            for pkg in os.listdir(nvidia_root):
+                for sub in ("bin", "lib"):
+                    d = os.path.join(nvidia_root, pkg, sub)
+                    if os.path.isdir(d):
+                        os.environ["PATH"] = d + os.pathsep + os.environ.get("PATH", "")
+                        try:
+                            os.add_dll_directory(d)
+                        except (OSError, AttributeError):
+                            pass
+    except Exception:
+        pass
+
+_setup_cuda_dlls()
+
+from faster_whisper import WhisperModel
+
+
+def _detect_device():
+    """Return (device, compute_type) for the best available backend."""
+    try:
+        import ctranslate2
+        cuda_types = ctranslate2.get_supported_compute_types("cuda")
+        if "float16" in cuda_types:
+            return "cuda", "float16"
+    except Exception:
+        pass
+    return "cpu", "int8"
+
+
 # Get the path
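The directory scan inside `_setup_cuda_dlls` can be exercised on its own against a synthetic site-packages tree. A stdlib-only sketch — the `collect_dll_dirs` helper is ours, extracted purely for illustration:

```python
import os
import tempfile

def collect_dll_dirs(site_packages: str) -> list:
    """Return every nvidia/<pkg>/bin and nvidia/<pkg>/lib directory below
    site_packages — the same candidates _setup_cuda_dlls registers with
    os.add_dll_directory and prepends to PATH on Windows."""
    found = []
    nvidia_root = os.path.join(site_packages, "nvidia")
    if not os.path.isdir(nvidia_root):
        return found
    for pkg in sorted(os.listdir(nvidia_root)):
        for sub in ("bin", "lib"):
            d = os.path.join(nvidia_root, pkg, sub)
            if os.path.isdir(d):
                found.append(d)
    return found

# Demo on a throwaway tree shaped like pip's nvidia-* package layout:
with tempfile.TemporaryDirectory() as sp:
    os.makedirs(os.path.join(sp, "nvidia", "cublas", "bin"))
    os.makedirs(os.path.join(sp, "nvidia", "cudnn", "lib"))
    for d in collect_dll_dirs(sp):
        print(os.path.relpath(d, sp))
```

Registering the directories *before* `from faster_whisper import WhisperModel` runs is the whole point: CTranslate2 loads cuBLAS/cuDNN at import time.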
@@ -16,12 +61,12 @@ def get_path(path):
 # Main function
 def transcribe(path, glob_file, model=None, language=None, verbose=False):
     """
-    Transcribes audio files in a specified folder using OpenAI's Whisper model.
+    Transcribes audio files in a specified folder using faster-whisper (CTranslate2).
 
     Args:
         path (str): Path to the folder containing the audio files.
         glob_file (list): List of audio file paths to transcribe.
-        model (str, optional): Name of the Whisper model to use for transcription.
+        model (str, optional): Name of the Whisper model size to use for transcription.
             Defaults to None, which uses the default model.
         language (str, optional): Language code for transcription. Defaults to None,
             which enables automatic language detection.
@@ -38,59 +83,67 @@ def transcribe(path, glob_file, model=None, language=None, verbose=False):
     - The function downloads the specified model if not available locally.
     - The transcribed text files will be saved in a "transcriptions" folder
       within the specified path.
+    - Uses CTranslate2 for up to 4x faster inference compared to openai-whisper.
+    - FFmpeg is bundled via the PyAV dependency — no separate installation needed.
+
     """
-    # Check for GPU acceleration and set device
-    if backends.mps.is_available():
-        device = 'mps'
-        Generator('mps').manual_seed(42)
-    elif cuda.is_available():
-        device = 'cuda'
-        Generator('cuda').manual_seed(42)
-    else:
-        device = 'cpu'
-        Generator().manual_seed(42)
-
-    # Load model on the correct device
-    model = whisper.load_model(model, device=device)
-    # Start main loop
-    files_transcripted=[]
+    SEP = "─" * 46
+
+    # ── Step 1: Detect hardware ──────────────────────────────────────
+    device, compute_type = _detect_device()
+    print(f"⚙ Device: {device} | Compute: {compute_type}")
+
+    # ── Step 2: Load model ───────────────────────────────────────────
+    print(f"⏳ Loading model '{model}' — downloading if needed...")
+    whisper_model = WhisperModel(model, device=device, compute_type=compute_type)
+    print("✅ Model ready!")
+    print(SEP)
+
+    # ── Step 3: Transcribe files ─────────────────────────────────────
+    total_files = len(glob_file)
+    print(f"📂 Found {total_files} item(s) in folder")
+    print(SEP)
+
+    files_transcripted = []
+    file_num = 0
     for file in glob_file:
         title = os.path.basename(file).split('.')[0]
-        print(Back.CYAN + '\nTrying to transcribe file named: {}\U0001f550'.format(title))
+        file_num += 1
+        print(f"\n{'─' * 46}")
+        print(f"📄 File {file_num}/{total_files}: {title}")
         try:
-            result = model.transcribe(
+            segments, info = whisper_model.transcribe(
                 file,
                 language=language,
-                verbose=verbose
+                beam_size=5
             )
-            files_transcripted.append(result)
+
             # Make folder if missing
-            try:
-                os.makedirs('{}/transcriptions'.format(path), exist_ok=True)
-            except FileExistsError:
-                pass
-            # Create segments for text files
-            start = []
-            end = []
-            text = []
-            for segment in result['segments']:
-                start.append(str(datetime.timedelta(seconds=segment['start'])))
-                end.append(str(datetime.timedelta(seconds=segment['end'])))
-                text.append(segment['text'])
-            # Save files to transcriptions folder
-            with open("{}/transcriptions/{}.txt".format(path, title), 'w', encoding='utf-8') as file:
-                file.write(title)
-                for i in range(len(result['segments'])):
-                    file.write('\n[{} --> {}]:{}'.format(start[i], end[i], text[i]))
-        # Skip invalid files
-        except RuntimeError:
-            print(Fore.RED + 'Not a valid file, skipping.')
-            pass
-    # Check if any files were processed.
+            os.makedirs('{}/transcriptions'.format(path), exist_ok=True)
+            # Stream segments as they are decoded
+            segment_list = []
+            with open("{}/transcriptions/{}.txt".format(path, title), 'w', encoding='utf-8') as f:
+                f.write(title)
+                for seg in segments:
+                    start_ts = str(datetime.timedelta(seconds=seg.start))
+                    end_ts = str(datetime.timedelta(seconds=seg.end))
+                    f.write('\n[{} --> {}]:{}'.format(start_ts, end_ts, seg.text))
+                    f.flush()
+                    if verbose:
+                        print("  [%.2fs → %.2fs] %s" % (seg.start, seg.end, seg.text))
+                    else:
+                        print("  Transcribed up to %.0fs..." % seg.end, end='\r')
+                    segment_list.append(seg)
+            print(f"✅ Done — saved to transcriptions/{title}.txt")
+            files_transcripted.append(segment_list)
+        except Exception:
+            print('⚠ Not a valid audio/video file, skipping.')
+
+    # ── Summary ──────────────────────────────────────────────────────
+    print(f"\n{SEP}")
     if len(files_transcripted) > 0:
-        output_text = 'Finished transcription, {} files can be found in {}/transcriptions'.format(len(files_transcripted), path)
+        output_text = f"✅ Finished! {len(files_transcripted)} file(s) transcribed.\n   Saved in: {path}/transcriptions"
     else:
-        output_text = 'No files elligible for transcription, try adding audio or video files to this folder or choose another folder!'
-    # Return output text
+        output_text = '⚠ No files eligible for transcription — try another folder.'
     print(output_text)
+    print(SEP)
     return output_text
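The bracketed timestamps written by `transcribe` come straight from `str(datetime.timedelta(...))`, which is why whole seconds render as `0:00:07` while fractional values carry six-digit microseconds, as in `0:00:12.540000` in the sample transcription files. A quick illustration:

```python
import datetime

def fmt_ts(seconds: float) -> str:
    """Format a segment boundary the way transcribe() writes it:
    str(timedelta) yields H:MM:SS, appending microseconds only when
    the value has a fractional part."""
    return str(datetime.timedelta(seconds=seconds))

print(fmt_ts(7))      # whole seconds -> H:MM:SS
print(fmt_ts(12.54))  # fractional seconds -> H:MM:SS.ffffff
```

If uniform timestamps were preferred, the fractional part could be rounded before formatting; the code above just mirrors what the module currently does.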