dedicated windows and mac scripts, fixed verbose checkbox
@@ -5,7 +5,7 @@ authors:
   given-names: "Kristofer Rolf"
   orcid: "https://orcid.org/0000-0002-5322-3350"
 title: "transcribe"
-version: 1.0
+version: 1.1.0
 doi: 10.5281/zenodo.7760511
 date-released: 2023-03-22
 url: "https://github.com/soderstromkr/transcribe"
GUI.py
@@ -14,6 +14,7 @@ class App:
 self.master = master
 master.title("Local Transcribe")

 #style options
 style = ttk.Style()
 style.configure('TLabel', font=('Arial', 10), padding=10)
 style.configure('TEntry', font=('Arial', 10), padding=10)
Mac_2_instructions.txt (new file)
@@ -0,0 +1,6 @@
+### Steps to make command file executable
+To make a file executable on a Mac, open a terminal window in the directory where the file is located. Then run the following command:
+
+chmod +x run_Mac_2.command
+
+After running this command, the file should be marked as executable and you should be able to run it by double-clicking it.
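The chmod step above can also be mirrored from Python, which may help when preparing the file from a script; this is a sketch against a temporary stand-in file, not the repo's actual run_Mac_2.command:

```python
import os
import stat
import tempfile

# Create a stand-in .command file (a placeholder, not the real run_Mac_2.command)
fd, path = tempfile.mkstemp(suffix='.command')
os.close(fd)

# Equivalent of `chmod +x <file>`: add the execute bit for user, group, and others
mode = os.stat(path).st_mode
os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

print(os.access(path, os.X_OK))  # True on POSIX systems
```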
@@ -31,13 +31,12 @@ git clone https://github.com/soderstromkr/transcribe.git
 ```
 and use the example.ipynb template to use the script.
 **OR** download the ```transcribe.py``` file into your work folder. Then you can import it into another script or notebook. I recommend Jupyter Notebook for new users; see the example below. (Remember to keep transcribe.py and example.ipynb in the same working folder.)
-##### Example with jupyter notebook
+#### Example with jupyter notebook
 See [example](example.ipynb) for an implementation in a Jupyter notebook; there is also an example of a simple [workaround](example_no_internet.ipynb) for transcribing while offline.
-##### Using the GUI
+#### Using the GUI
 You can also run the GUI version from your terminal with ```python GUI.py``` or with the batch file run_gui.bat; just make sure to add your conda path to it. If you want to download a model first and then go offline for transcription, I recommend running the model once with the default sample folder, which downloads the model locally. The GUI should look like this:
 
 ##### Model location
 On Windows, the models are located in ```C:\Users\<username>\.cache\whisper\<model>```
 
 [^1]: Advanced users can use ```pip install ffmpeg-python``` but be ready to deal with some [PATH issues](https://stackoverflow.com/questions/65836756/python-ffmpeg-wont-accept-path-why), which I encountered on Windows 11.
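The model-location note above has a Unix-side counterpart; a small sketch of how that cache path is typically assembled (the `~/.cache/whisper` location and `.pt` file name are assumptions extrapolated from the README's Windows path, not verified against every whisper version):

```python
import os

# Assumed whisper cache layout: <home>/.cache/whisper/<model>.pt on Linux/macOS,
# mirroring the C:\Users\<username>\.cache\whisper\<model> path given for Windows
model = 'base'  # hypothetical model name
cache_dir = os.path.join(os.path.expanduser('~'), '.cache', 'whisper')
model_path = os.path.join(cache_dir, model + '.pt')
print(model_path)
```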
run_Mac_1.sh (new file)
@@ -0,0 +1,4 @@
+#!/bin/bash
+echo Starting...
+conda activate venv
+python -u GUI.py
run_Mac_2.command (new file)
@@ -0,0 +1,3 @@
+#!/bin/bash
+echo Running Script
+python -u GUI.py
run_Windows.bat (new file)
@@ -0,0 +1,5 @@
+@echo off
+echo Starting...
+call conda activate venv
+REM OPTION 2: (KEEP TEXT WITHIN QUOTES AND CHANGE USERNAME) "C:/Users/user/Anaconda3/condabin/activate.bat"
+call python GUI.py
@@ -1,5 +0,0 @@
-@echo off
-call 'PATH_TO_CONDA'
-call 'ACTIVATE_NEEDED_ENVS'
-call python GUI.py
-PAUSE
@@ -1,3 +1,5 @@
 Armstrong_Small_Step
 In seconds:
-[0.00 --> 24.00]: That's one small step for man, one giant leap for mankind.
+[0.00 --> 7.00]: I'm going to step off the limb now.
+[7.00 --> 18.00]: That's one small step for man.
+[18.00 --> 24.00]: One giant leap for mankind.
@@ -1,3 +1,4 @@
 Axel_Pettersson_röstinspelning
 In seconds:
-[0.00 --> 16.00]: Hej, jag heter Axel Pettersson, jag föddes i Örebro 1976. Jag har varit Wikipedia sen 2008 och jag har översatt röstintroduktionsprojektet till svenska.
+[0.00 --> 6.14]: Hej, jag heter Axel Pettersson. Jag följer bror 1976.
+[6.40 --> 15.10]: Jag har varit vikerpedjan sen 2008 och jag har översatt röstintroduktionsprojektet till svenska.
@@ -1,7 +1,7 @@
 import whisper
 import glob, os

-def transcribe(path, file_type, model=None, language=None, verbose=True):
+def transcribe(path, file_type, model=None, language=None, verbose=False):
     '''Implementation of OpenAI's whisper model. Downloads model, transcribes audio files in a folder and returns the text files with transcriptions'''

     try:
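The signature change above flips the verbose default to False; a quick stub (not the real function, which needs whisper installed) showing what callers now get when they omit the argument:

```python
# Stub mirroring only the updated signature of transcribe(); the real body is omitted
def transcribe(path, file_type, model=None, language=None, verbose=False):
    return {'path': path, 'file_type': file_type, 'model': model,
            'language': language, 'verbose': verbose}

args = transcribe('audio', '.mp3')  # 'audio' and '.mp3' are made-up arguments
print(args['verbose'])  # False unless the caller opts in
```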
@@ -11,10 +11,10 @@ def transcribe(path, file_type, model=None, language=None, verbose=True):

     glob_file = glob.glob(path+'/*{}'.format(file_type))

-    print('Using {} model, you can change this by specifying model="medium" for example'.format(model))
-    print('Only looking for file type {}, you can change this by specifying file_type="mp3"'.format(file_type))
-    print('Expecting {} language, you can change this by specifying language="English". None will try to auto-detect'.format(language))
-    print('Verbosity is {}. If TRUE it will print out the text as it is transcribed, you can turn this off by setting verbose=False'.format(verbose))
+    print('Using {} model'.format(model))
+    print('File type is {}'.format(file_type))
+    print('Language is being detected automatically for each file')
+    print('Verbosity is set to {}'.format(verbose))
     print('\nThere are {} {} files in path: {}\n\n'.format(len(glob_file), file_type, path))

     print('Loading model...')
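The glob call above drives the whole folder scan; a self-contained sketch of the same pattern against a throwaway directory (the file names here are made up):

```python
import glob
import os
import tempfile

# Throwaway folder with a mix of file types
path = tempfile.mkdtemp()
for name in ('interview1.mp3', 'interview2.mp3', 'notes.wav'):
    open(os.path.join(path, name), 'w').close()

# Same pattern transcribe() builds: path + '/*' + file_type
file_type = '.mp3'
glob_file = glob.glob(path + '/*{}'.format(file_type))
print(len(glob_file))  # 2
```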
@@ -28,7 +28,7 @@ def transcribe(path, file_type, model=None, language=None, verbose=True):
         result = model.transcribe(
             file,
             language=language,
-            verbose=True
+            verbose=verbose
         )
         start=[]
         end=[]
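The start/end lists initialized above are presumably filled from whisper's per-segment output; a sketch against a hand-made result dict shaped like whisper's return value (the segment values are invented):

```python
# Stand-in for model.transcribe(...)'s return value; whisper returns a dict
# with a 'segments' list of entries carrying 'start', 'end', and 'text'
result = {'segments': [
    {'start': 0.0, 'end': 7.0, 'text': "I'm going to step off the limb now."},
    {'start': 7.0, 'end': 18.0, 'text': "That's one small step for man."},
    {'start': 18.0, 'end': 24.0, 'text': 'One giant leap for mankind.'},
]}

# Collect per-segment timestamps the way the function's start/end lists suggest
start = [seg['start'] for seg in result['segments']]
end = [seg['end'] for seg in result['segments']]
print(start)  # [0.0, 7.0, 18.0]
print(end)    # [7.0, 18.0, 24.0]
```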