
Conversation

@ved1beta
Contributor

@ved1beta ved1beta commented Apr 24, 2025

Ignore the docs folder for now; main.ipynb has everything, the remaining ipynb files are visualizations, and the py files are the same as main, just split up. Let me know if I can add the numbers from here to the docs (already done) so that I can push the changes.


@BenjaminBossan
Member

Thanks for the PR @ved1beta. Unfortunately, there is an issue with rendering many of the notebooks in the browser, including main.ipynb. Could you please check if this can be fixed?

From what I can tell from the notebooks that do render and from the scripts, this contains information on memory requirements, inference speed, etc. I think this is useful information, but I would like to generate and present the data in a different fashion, as right now it is very static and we don't really want to have one script per method.

Instead, what I envision is a setup in a very similar style to the existing MetaMath suite: a single script plus multiple "experiment" files that define how the PEFT methods are configured. Then we can run the script with an experiment and it will write all the results to a JSON file, the same as is done right now. Finally, we can extend app.py to have a second tab (or so) showing the visualization that is currently in the notebooks.
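
To make this more concrete, the runner could look roughly like the sketch below. This is just a sketch for discussion; the file layout, the `run_experiment` name, and the config fields are placeholders, not a final design:

```python
import json
from pathlib import Path


def run_experiment(experiment_dir: str, output_dir: str = "results") -> None:
    # Each experiment directory holds a small config describing how the PEFT
    # method is set up, e.g. {"peft_type": "LORA", "r": 8, "lora_alpha": 16}.
    peft_config = json.loads(Path(experiment_dir, "config.json").read_text())

    # ... build the base model, apply the PEFT config, and collect the same
    # measurements as the current scripts (memory, inference speed, ...) ...
    metrics = {"peak_memory_mb": None, "tokens_per_sec": None}

    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    result = {
        "experiment": Path(experiment_dir).name,
        "peft_config": peft_config,
        "metrics": metrics,
    }
    (out / f"{Path(experiment_dir).name}.json").write_text(json.dumps(result, indent=2))
```

Adding a new method would then just mean adding a new experiment directory, without touching the runner.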

This has the advantage that it's very easy to add an "experiment" for a new method. All the results will be in simple JSON files that can be easily tracked in git (ipynb is always hard to review). Users can easily load these files and run their own data analysis if they're interested, which is not easily done with notebooks. This approach also takes care of tracking a bunch of meta info, making the results easier to audit and reproduce. And finally, we will deploy the gradio app automatically, so that users can always inspect the most recent results on HF Spaces.
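
To illustrate the kind of meta info I have in mind (the exact fields are up for discussion), each result file could carry something like:

```python
# Hypothetical example of meta info recorded alongside each result, so that
# runs stay auditable and reproducible; the exact fields are not fixed yet.
import datetime
import json
import platform

import torch
import peft

meta = {
    "timestamp": datetime.datetime.now().isoformat(),
    "peft_version": peft.__version__,
    "torch_version": torch.__version__,
    "python_version": platform.python_version(),
    "device": torch.cuda.get_device_name(0) if torch.cuda.is_available() else "cpu",
}
print(json.dumps(meta, indent=2))
```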

To make these changes, I think we can take a lot of the code from the existing MetaMathQA files and from the scripts that you added in this PR. I know that this is a big change and would require extra work; I hope you're interested in working on it. In general, it is a good idea to explain your idea beforehand in an issue so that we can discuss these design decisions up front.
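
And for the app.py extension mentioned above, the second tab could be as simple as this rough gradio sketch (tab names and contents are placeholders):

```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Tab("Results"):
        gr.Markdown("Existing results view goes here.")
    with gr.Tab("Visualizations"):
        # load the result JSON files here and render the plots that currently
        # live in the notebooks
        gr.Markdown("Plots generated from the result JSON files go here.")

if __name__ == "__main__":
    demo.launch()
```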

@ved1beta
Contributor Author

I don't know what the issue with the ipynb files is; here is the Google Colab link for it: link

@BenjaminBossan
Member

I don't know what the issue with the ipynb files is; here is the Google Colab link for it: link

Thanks for the link. I see that there is some duplicate code, like the definition of the Bone config and layer; that should not be necessary. Also, there is some code for measuring training speed, but that's already covered by the method comparison suite.

Otherwise, my previous comment still stands.

@ved1beta
Contributor Author

ved1beta commented Apr 29, 2025

I have messed up this branch, so I'm closing this PR. I will raise a different one containing all the changes, with a directory similar to the MetaMath suite, with examples and a single script to run everything :) Then I will deal with the docs 🫡

@ved1beta ved1beta mentioned this pull request Apr 30, 2025
@ved1beta ved1beta closed this Apr 30, 2025
@ved1beta ved1beta deleted the benchmark_scripts_2 branch April 30, 2025 09:17
cyyever pushed a commit to cyyever/peft that referenced this pull request Sep 4, 2025