Compare commits

8 Commits

Author SHA1 Message Date
Kristofer Söderström 7d50d5f4cf QOL improvements 2023-11-06 09:57:44 +01:00
Kristofer Söderström 7799d03960 bug fixes 2023-11-06 09:31:53 +01:00
Kristofer Rolf Söderström f88186dacc Update app.py 2023-10-19 09:26:43 +02:00
Kristofer Rolf Söderström 3f5c1491ac Delete build.zip 2023-10-19 09:20:55 +02:00
Kristofer Rolf Söderström c83e15bdba Update README.md 2023-10-19 09:20:29 +02:00
Kristofer Rolf Söderström ff16ad30e1 Merge pull request #2 from ValentinFunk/patch-1: Fix mac instructions link 2023-10-19 09:09:01 +02:00
Valentin 622165b3e6 Update Mac_instructions.md 2023-09-08 10:11:02 +02:00
Valentin 0e9cbdca58 Fix mac instructions link 2023-09-08 10:09:15 +02:00
6 changed files with 18 additions and 30 deletions
+2 -2
@@ -5,5 +5,5 @@ Unfortunately, I have not found a permanent solution for this, not being a Mac user
1. You can also right-click (or equivalent) on the root folder to open a Terminal within the folder.
2. Run the following command:
```
- python main.py
+ python app.py
```
+3 -9
@@ -8,7 +8,6 @@ Local Transcribe with Whisper is a user-friendly desktop application that allows
3. Model selection: Now a dropdown option that includes most models for typical use.
2. New and improved GUI.
![python GUI.py](images/gui-windows.png)
3. Executable: On Windows and don't want to install Python? Try the exe file! See below for instructions (experimental).
## Features
* Select the folder containing the audio or video files you want to transcribe. Tested with m4a video.
@@ -27,11 +26,6 @@ Or by cloning the repository with:
```
git clone https://github.com/soderstromkr/transcribe.git
```
### Executable Version **(Experimental. Windows only)**
The executable version of Local Transcribe with Whisper is a standalone program and should work out of the box. This experimental version is available if you have Windows and do not have (or don't want to install) Python and the additional dependencies. However, it requires more disk space (around 1 GB), has no GPU acceleration, and has only been lightly tested for bugs. Let me know if you run into any issues!
1. Download the project folder, as the image above shows.
2. Find and unzip build.zip (get a coffee or tea; this might take a while depending on your computer).
3. Run the executable file (app.exe).
### Python Version **(any platform including Mac users)**
This is recommended if you don't have Windows, if you have Windows and already use Python, or if you want GPU acceleration (PyTorch and CUDA) for faster transcriptions. I would generally recommend this method anyway, but I understand that not everyone wants to go through the installation process for Python, Anaconda, and the other required packages.
1. This script was made and tested in an Anaconda environment with Python 3.10. I recommend this method if you're not familiar with Python.
@@ -45,9 +39,9 @@ conda install -c conda-forge ffmpeg-python
```
pip install -U openai-whisper
```
4. To run the app built on TKinter and TTKthemes. If using these options, make sure they are installed in your Python build. You can install them via pip.
4. The app is built on Tkinter and ttkthemes; make sure they are installed in your Python build. You can install them and colorama via pip.
```
pip install tkinter
pip install colorama
```
and
```
@@ -55,7 +49,7 @@ pip install customtkinter
```
5. Run the app:
1. For **Windows**: In the same folder as the *app.py* file, run the app from a terminal with ```python app.py```, or use the batch file run_Windows.bat. The batch file assumes you have conda installed and are in the base environment; this is for simplicity, but users are usually advised to create an environment, see [here](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-with-commands) for more info. Just make sure you have the correct environment (right-click on the file and press edit to make any changes). If you want to download a model first and then go offline for transcription, I recommend running the model on the default sample folder, which will download the model locally.
2. For **Mac**: Haven't figured out a better way to do this, see [the instructions here](Mac_instructions.txt)
2. For **Mac**: I haven't figured out a better way to do this; see [the instructions here](Mac_instructions.md).
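Before going offline, two quick checks can save a failed run: Whisper shells out to the ffmpeg binary at transcription time, and it caches downloaded model weights under `$XDG_CACHE_HOME/whisper` (`~/.cache/whisper` by default). A minimal stdlib sketch; the function names are mine, and the cache location assumes openai-whisper's default download root:

```python
import os
import shutil


def ffmpeg_available() -> bool:
    """True if an ffmpeg executable is on PATH (Whisper needs it to decode audio)."""
    return shutil.which("ffmpeg") is not None


def whisper_cache_dir() -> str:
    """openai-whisper's default model download root (assumed, not read from the library)."""
    base = os.getenv("XDG_CACHE_HOME", os.path.join(os.path.expanduser("~"), ".cache"))
    return os.path.join(base, "whisper")


def cached_models() -> list:
    """Names of model checkpoints (*.pt) already downloaded, if any."""
    d = whisper_cache_dir()
    if not os.path.isdir(d):
        return []
    return sorted(f for f in os.listdir(d) if f.endswith(".pt"))
```

Running the app once on the sample folder, as suggested above, should leave the model you picked in `cached_models()`.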
## Usage
1. When launched, the app will also open a terminal that shows some additional information.
2. Select the folder containing the audio or video files you want to transcribe by clicking the "Browse" button next to the "Folder" label. This will open a file dialog where you can navigate to the desired folder. Remember, you won't be choosing individual files but whole folders!
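Since whole folders (not individual files) are selected, the app has to decide which files inside are eligible. A sketch of what a helper like `get_path` might do; the extension list here is my assumption, not the repo's actual filter:

```python
from pathlib import Path

# Extensions are illustrative; the app's real filter may differ.
MEDIA_EXTENSIONS = {".m4a", ".mp3", ".wav", ".mp4", ".mkv"}


def eligible_files(folder: str) -> list:
    """Return sorted paths of media files directly inside *folder*."""
    return sorted(
        str(p) for p in Path(folder).iterdir()
        if p.is_file() and p.suffix.lower() in MEDIA_EXTENSIONS
    )
```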
+12 -13
@@ -5,9 +5,10 @@ from tkinter import messagebox
from src._LocalTranscribe import transcribe, get_path
import customtkinter
import threading
from colorama import Back, Fore
from colorama import Back
import colorama
colorama.init(autoreset=True)
import os
@@ -41,7 +42,8 @@ class App:
language_frame.pack(fill=tk.BOTH, padx=10, pady=10)
customtkinter.CTkLabel(language_frame, text="Language:", font=font).pack(side=tk.LEFT, padx=5)
self.language_entry = customtkinter.CTkEntry(language_frame, width=50, font=('Roboto', 12, 'italic'))
self.language_entry.insert(0, 'Select language or clear to detect automatically')
self.default_language_text = "Enter language (or ignore to auto-detect)"
self.language_entry.insert(0, self.default_language_text)
self.language_entry.bind('<FocusIn>', on_entry_click)
self.language_entry.pack(side=tk.LEFT, fill=tk.X, expand=True)
# Model frame
@@ -72,7 +74,8 @@ class App:
# Helper functions
# Browsing
def browse(self):
folder_path = filedialog.askdirectory()
initial_dir = os.getcwd()
folder_path = filedialog.askdirectory(initialdir=initial_dir)
self.path_entry.delete(0, tk.END)
self.path_entry.insert(0, folder_path)
# Start transcription
@@ -85,29 +88,25 @@ class App:
def transcribe_thread(self):
path = self.path_entry.get()
model = self.model_combobox.get()
language = self.language_entry.get() or None
language = self.language_entry.get()
# Check if the language field has the default text or is empty
if language == self.default_language_text or not language.strip():
language = None # This is the same as passing nothing
verbose = self.verbose_var.get()
# Show progress bar
self.progress_bar.pack(fill=tk.X, padx=5, pady=5)
self.progress_bar.start()
# Setting path and files
glob_file = get_path(path)
info_path = 'I will transcribe all eligible audio/video files in the path: {}\n\nContinue?'.format(path)
answer = messagebox.askyesno("Confirmation", info_path)
if not answer:
self.progress_bar.stop()
self.progress_bar.pack_forget()
self.transcribe_button.configure(state=tk.NORMAL)
return
messagebox.showinfo("Message", "Starting transcription!")
# Start transcription
error_language = 'https://github.com/openai/whisper#available-models-and-languages'
try:
output_text = transcribe(path, glob_file, model, language, verbose)
except UnboundLocalError:
messagebox.showinfo("Files not found error!", 'Nothing found, choose another folder.')
pass
except ValueError:
messagebox.showinfo("Language error!", 'See {} for supported languages'.format(error_language))
messagebox.showinfo("Language error!", 'Invalid language name, you might have to clear the default text to continue!')
# Hide progress bar
self.progress_bar.stop()
self.progress_bar.pack_forget()
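The change above replaces `language or None` with an explicit check for the placeholder text, so a user who never touches the entry still gets auto-detection. That logic can be factored out and exercised on its own; a minimal sketch, with the function name my own:

```python
DEFAULT_LANGUAGE_TEXT = "Enter language (or ignore to auto-detect)"


def normalize_language(entry_text: str, default: str = DEFAULT_LANGUAGE_TEXT):
    """Map the language entry's contents to a Whisper language argument.

    The placeholder text or a blank/whitespace-only entry means
    'auto-detect', which Whisper expects as None.
    """
    if entry_text == default or not entry_text.strip():
        return None
    return entry_text.strip()
```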
-3
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b361c4993eceb2006f225ffdd2f7b63265586e3dded351972dfcd5e5d75559c7
size 249467977
@@ -1,4 +1,2 @@
Armstrong_Small_Step
[0:00:00 --> 0:00:07]: And they're still brought to land now.
[0:00:07 --> 0:00:18]: It's one small step for man.
[0:00:18 --> 0:00:23]: One by a fleet for man time.
[0:00:00 --> 0:00:29.360000]: alumnfeldaguyrjarna om det nya skirprå kızım om det där föddarna hatt splittar, do nackrott,
Binary file not shown.