
First-order approximation #77

Open
AceChuse opened this issue Feb 18, 2020 · 11 comments

Comments

@AceChuse

Thanks for your work! It is a really great idea. There is one thing I want to ask: in the paper, you mentioned the first-order approximation of MAML, but I haven't seen details about it or the name of the algorithm. Can you give more details about it?

@AceChuse
Author

In addition, did you try training a large-scale neural network with the first-order approximation? Can it get a better result on a dataset like miniImageNet?

@Runist

Runist commented Jul 24, 2020


I think you can read "How to train your MAML". This paper asks five questions about MAML.

@AceChuse
Author


Thank you for your response! I have read "How to train your MAML". However, what I want to know is the mathematical derivation of the first-order approximation. Could you give more detail about it?
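For reference, here is my own summary of the derivation (based on Section 5.2 of the MAML paper, not on anything in this repository). The inner loop adapts the initialization with one gradient step, and the meta-gradient then requires the Jacobian of that step:

```latex
% Inner-loop adaptation: one gradient step on the training loss
\theta_i' = \theta - \alpha \, \nabla_\theta \mathcal{L}^{train}_{\mathcal{T}_i}(\theta)

% Exact meta-gradient of the test loss, by the chain rule
% (the Hessian term comes from differentiating the inner update):
\nabla_\theta \mathcal{L}^{test}_{\mathcal{T}_i}(\theta_i')
  = \left( I - \alpha \, \nabla^2_\theta \mathcal{L}^{train}_{\mathcal{T}_i}(\theta) \right)
    \nabla_{\theta'} \mathcal{L}^{test}_{\mathcal{T}_i}(\theta_i')

% First-order approximation: drop the Hessian term, i.e. assume
% \nabla^2_\theta \mathcal{L}^{train} \approx 0, so the Jacobian of the
% inner update is approximately the identity:
\nabla_\theta \mathcal{L}^{test}_{\mathcal{T}_i}(\theta_i')
  \approx \nabla_{\theta'} \mathcal{L}^{test}_{\mathcal{T}_i}(\theta_i')
```

In practice this means the meta-update simply takes the gradient of the test loss at the adapted parameters and applies it to the initialization, so no second derivatives are ever computed.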

@Runist

Runist commented Jul 24, 2020


Sorry about that. I just wrote the code and trained it. I'm not good at math, so I can't explain it.

@AceChuse
Author


Can I see the code for this process and how to use it? I asked about the math because I could not find the code. If I know how you wrote the code, I can work out the math myself.

@Runist

Runist commented Jul 24, 2020


Sure. Brother, let's just switch to Chinese, I can't keep this up. Actually, the code I wrote doesn't use the second derivative at all; honestly, I'm quite curious about this part myself. code
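For anyone following along, here is a minimal 1-D numerical sketch (my own toy example, not code from the linked repository) of what skipping the second derivative changes. The exact MAML meta-gradient backpropagates through the inner update via the chain rule, which brings in the Hessian of the training loss; the first-order version drops that term and just reuses the test-loss gradient at the adapted parameters:

```python
# Toy 1-D losses (hypothetical, chosen so every derivative is easy):
#   train loss L_tr(t) = 0.5 * (t - 1)^2, so L_tr'(t) = t - 1, L_tr''(t) = 1
#   test  loss L_te(t) = 0.5 * (t - 2)^2, so L_te'(t) = t - 2
alpha = 0.1   # inner-loop learning rate
theta = 0.0   # meta-parameters (the initialization)

grad_tr = lambda t: t - 1.0   # dL_tr/dt
hess_tr = 1.0                 # d2L_tr/dt2 (constant for a quadratic)
grad_te = lambda t: t - 2.0   # dL_te/dt

# Inner-loop adaptation: one gradient step on the train loss.
theta_prime = theta - alpha * grad_tr(theta)

# Exact MAML meta-gradient: chain rule through the inner update,
# d(theta')/d(theta) = 1 - alpha * L_tr''(theta).
exact = (1.0 - alpha * hess_tr) * grad_te(theta_prime)

# First-order approximation (FOMAML): drop the Hessian term, i.e. treat
# d(theta')/d(theta) as the identity, keeping only the test-loss gradient
# at the adapted parameters.
first_order = grad_te(theta_prime)

print(exact)        # ~ -1.71
print(first_order)  # ~ -1.9
```

The two meta-gradients differ only by the factor (1 - alpha * Hessian), which is why code that never builds a graph through the inner-loop gradients is implicitly doing the first-order version.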

@AceChuse
Author

OK, that's why I was hoping the author herself would come and answer this.

@Runist

Runist commented Jul 24, 2020


According to Professor Hung-yi Lee's (李宏毅) lectures, he approximates the second-derivative terms as 0 or 1. But doing so hurts the accuracy of the results.

@AceChuse
Author


Are there lecture notes for this? Could you share a link?

@Runist

Runist commented Jul 24, 2020


Search for "李宏毅" on Bilibili, open the first 2020 result, and scroll down to the Meta-Learning chapter. That's it.

@AceChuse
Author

I found it, thank you very much!
