Compare commits
12 Commits
| SHA1 |
|---|
| b765ff6bc6 |
| 867b082589 |
| b4017c6fee |
| 1ea5187e78 |
| 0051ceb873 |
| 76be00552f |
| a5dd5d4a03 |
| 43bcffaf4c |
| 4e1c709f43 |
| dfe967bd58 |
| 586289efe5 |
| c5a5597eee |
@@ -2,4 +2,4 @@

Unfortunately, I have not found a permanent solution for this; not being a Mac user has limited the ways I can test it. For now, these are the recommended steps for a beginner user:

1. Open a terminal and navigate to the root folder (`transcribe-main` if you downloaded the folder). You can also right-click (or equivalent) on the root folder to open a Terminal within the folder.

2. Run the following command:

```python GUI.py```

python GUI.py
Binary file not shown.
After Width: | Height: | Size: 135 KiB
@@ -1,42 +1,71 @@

## transcribe

Simple script that uses OpenAI's Whisper to transcribe audio files from your local folders.

## Local Transcribe

Local Transcribe uses OpenAI's Whisper to transcribe audio files from your local folders, creating text files on disk.

## Note

This implementation and guide are mostly made for researchers who are not familiar with programming and want a way to transcribe their files locally, without an internet connection, as is usually required within ethical data practices and frameworks. Two examples are shown: a normal workflow with an internet connection, and one in which the model is loaded first, via openai-whisper, after which transcription can be done without being connected to the internet. There is now also a GUI implementation; read below for more information.
### Instructions

#### Requirements

1. This script was made and tested in an Anaconda environment with python 3.10. I recommend this method if you're not familiar with python.

1. This script was made and tested in an Anaconda environment with Python 3.10. I recommend this method if you're not familiar with Python.

See [here](https://docs.anaconda.com/anaconda/install/index.html) for instructions. You might need administrator rights.

2. Whisper requires some additional libraries. The [setup](https://github.com/openai/whisper#setup) page states: "The codebase also depends on a few Python packages, most notably HuggingFace Transformers for their fast tokenizer implementation and ffmpeg-python for reading audio files."

Users might not need to specifically install Transformers. However, a conda installation might be needed for ffmepg[^1], which takes care of setting up PATH variables. From the anaconda prompt, type or copy the following:

Users might not need to specifically install Transformers. However, a conda installation might be needed for ffmpeg[^1], which takes care of setting up PATH variables. From the Anaconda prompt, type or copy the following:
```
conda install -c conda-forge ffmpeg-python
```

3. The main functionality comes from openai-whisper. See their [page](https://github.com/openai/whisper) for details. As of 2023-03-22 you can install via:

```
pip install -U openai-whisper
```
4. There is an option to run a batch file, which launches a GUI built on TKinter and TTKthemes. If using these options, make sure they are installed in your python build. You can install them via pip.

4. There is an option to run a batch file, which launches a GUI built on TKinter and TTKthemes. If using these options, make sure they are installed in your Python build. You can install them via pip:

```
pip install tk
```

and

```
pip install ttkthemes
```
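Since tkinter ships with most CPython builds while ttkthemes must come from pip, a quick availability check can save a confusing traceback when launching the GUI later. This is only a sketch; the two module names are the only assumption:

```python
# Check that the GUI dependencies from step 4 can be resolved before
# running GUI.py. importlib.util.find_spec returns None for missing modules.
import importlib.util

have_tk = importlib.util.find_spec('tkinter') is not None
have_themes = importlib.util.find_spec('ttkthemes') is not None
print('tkinter available:', have_tk)
print('ttkthemes available:', have_themes)
```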
#### Using the script

This is a simple script with no installation. You can either clone the repository with

This is a simple script with no installation. You can download the zip folder and extract it to your preferred working folder.



Or by cloning the repository with:

```
git clone https://github.com/soderstromkr/transcribe.git
```

and use the example.ipynb template to use the script.

**OR** download the ```transcribe.py``` file into your work folder. Then you can either import it into another script or notebook for use. I recommend Jupyter Notebook for new users; see the example below. (Remember to keep transcribe.py and example.ipynb in the same working folder.)
#### Example with jupyter notebook

See [example](example.ipynb) for an implementation on jupyter notebook, also added an example for a simple [workaround](example_no_internet.ipynb) to transcribe while offline.

#### Example with Jupyter Notebook

See [example](example.ipynb) for an implementation in a Jupyter Notebook; an example of a simple [workaround](example_no_internet.ipynb) to transcribe while offline has also been added.
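As a dry-run sketch of what such a notebook cell sets up, the following reproduces the glob pattern and output path that transcribe.py itself builds, without loading any Whisper model. The `sample` folder and `interview_01` file name are hypothetical:

```python
# Dry-run sketch: the file-selection pattern and output location used by
# transcribe(path, file_type, ...). No model is downloaded or loaded here.
import os

path, file_type = 'sample', '.mp3'
pattern = path + '/*{}'.format(file_type)       # same pattern transcribe.py globs
title = 'interview_01'                          # hypothetical audio file stem
out_file = "{}/transcriptions/{}.txt".format(path, title)  # output path from transcribe.py
print(pattern, '->', out_file)
```

In the notebook itself, the actual call would be along the lines of `transcribe(path='sample', file_type='.mp3', model='base')`, matching the signature defined in transcribe.py; the chosen Whisper model is downloaded on first use.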
#### Using the GUI

You can also run the GUI version from your terminal running ```python GUI.py``` or with the batch file called run_gui.bat, just make sure to add your conda path to it. If you want to download a model first, and then go offline for transcription, I recommend running the model with the default sample folder, which will download the model locally. The GUI should look like this:

You can also run the GUI version from your terminal by running ```python GUI.py``` or with the batch file called run_Windows.bat (for Windows users); just make sure to add your conda path to it. If you want to download a model first and then go offline for transcription, I recommend running the model with the default sample folder, which will download the model locally.

The GUI should look like this:



or this, on a Mac, by running `python GUI.py` or `python3 GUI.py`:



[^1]: Advanced users can use ```pip install ffmpeg-python``` but be ready to deal with some [PATH issues](https://stackoverflow.com/questions/65836756/python-ffmpeg-wont-accept-path-why), which I encountered in Windows 11.
BIN
Binary file not shown.
After Width: | Height: | Size: 324 KiB
+1
-1
@@ -1,5 +1,5 @@
@echo off
echo Starting...
call conda activate venv
call conda activate base
REM OPTION 2 : (KEEP TEXT WITHIN QUOTES AND CHANGE USERNAME) "C:/Users/user/Anaconda3/condabin/activate.bat"
call python GUI.py
+10
-3
@@ -1,5 +1,7 @@
import whisper
import glob, os
#import torch #uncomment if using torch with cuda, below too
import datetime

def transcribe(path, file_type, model=None, language=None, verbose=False):
    '''Implementation of OpenAI's whisper model. Downloads model, transcribes audio files in a folder and returns the text files with transcriptions'''
@@ -11,6 +13,11 @@ def transcribe(path, file_type, model=None, language=None, verbose=False):

    glob_file = glob.glob(path+'/*{}'.format(file_type))

    #if torch.cuda.is_available():
    #    generator = torch.Generator('cuda').manual_seed(42)
    #else:
    #    generator = torch.Generator().manual_seed(42)

    print('Using {} model'.format(model))
    print('File type is {}'.format(file_type))
    print('Language is being detected automatically for each file')
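The `glob.glob(path+'/*{}'.format(file_type))` line is what selects the audio files to transcribe. A small self-contained check (using a temporary folder and hypothetical file names) shows which files the pattern picks up:

```python
# Demonstrate the file selection used in transcribe(): only files whose
# names end with the given extension match the glob pattern.
import glob, os, tempfile

folder = tempfile.mkdtemp()
for name in ('a.mp3', 'b.mp3', 'notes.txt'):    # hypothetical folder contents
    open(os.path.join(folder, name), 'w').close()

file_type = '.mp3'
glob_file = glob.glob(folder + '/*{}'.format(file_type))
print(sorted(os.path.basename(f) for f in glob_file))  # only the .mp3 files
```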
@@ -34,15 +41,15 @@ def transcribe(path, file_type, model=None, language=None, verbose=False):

    end=[]
    text=[]
    for i in range(len(result['segments'])):
        start.append(result['segments'][i]['start'])
        end.append(result['segments'][i]['end'])
        start.append(str(datetime.timedelta(seconds=(result['segments'][i]['start']))))
        end.append(str(datetime.timedelta(seconds=(result['segments'][i]['end']))))
        text.append(result['segments'][i]['text'])

    with open("{}/transcriptions/{}.txt".format(path,title), 'w', encoding='utf-8') as file:
        file.write(title)
        file.write('\nIn seconds:')
        for i in range(len(result['segments'])):
            file.writelines('\n[{:.2f} --> {:.2f}]:{}'.format(start[i], end[i], text[i]))
            file.writelines('\n[{} --> {}]:{}'.format(start[i], end[i], text[i]))

    print('\nFinished file number {}.\n\n\n'.format(idx+1))
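The change in this hunk swaps raw second counts for `datetime.timedelta` timestamps in the output lines. A standalone comparison of the two formats, with made-up segment times, shows the difference:

```python
# Compare the old (seconds) and new (timedelta) timestamp formats
# used when writing the transcription lines.
import datetime

start_s, end_s, text = 3.5, 7.25, ' Hello world'   # hypothetical segment
old_line = '\n[{:.2f} --> {:.2f}]:{}'.format(start_s, end_s, text)
new_line = '\n[{} --> {}]:{}'.format(
    str(datetime.timedelta(seconds=start_s)),
    str(datetime.timedelta(seconds=end_s)),
    text)
print(repr(old_line))   # seconds with two decimals
print(repr(new_line))   # H:MM:SS.microseconds timestamps
```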