[Proposal: MB14] Moonbeam NFT Tracker

Sorry for the late reply.
I also fully support this proposal in my position as treasury councillor.

With the signalled support from 3 of the 5 treasury councillors, feel free to progress the proposal to the next stage.

I will also act as the assigned treasury council member for this proposal, so in case you have any questions or need guidance, feel free to reach out at any time :slight_smile:

3 Likes

Link to the on-chain proposal:

3 Likes

Wow, that was quick, Turrizt. Thank you!

2 Likes

The sad truth is… @turrizt was an early AI project, a PoC for ChatGPT pre-releases

5 Likes

I created a TreasuryCouncilMotion for the TreasuryProposal, which is currently up for voting by the councillors:

Proposal: 0xe0a538818892a049744de355ee661d7b81a82b0f7ea0ac1d7c96dd0d0365f1a3
Index: 12
1 Like

The motion has passed and will be funded in the next spend period (~ 5 days)

2 Likes

Amazing! Thank you for your support.

2 Likes

Congrats @dotdatamaxi! Really happy for you. Please continue to support all the NFT projects here :slight_smile:

3 Likes

Thank you! I will definitely continue to support all the mentioned projects and will add more soon.

3 Likes

Heya, I just joined the forum, was browsing through the latest posts, and saw this. I know we just spoke on Telegram the other day after I saw your tweet. I think this is a great initiative, and as a user and an NFT project and marketplace developer, I support it. Obviously I am not on a council, so I'm just expressing my personal support here. But I do have some technical comments to make :wink:

While I think it's great that you used the tools you are most comfortable with, which allowed for a quick turnaround, and it's great that it works, I think much more web3, on-chain ways to subscribe to NFT changes could be explored in place of scraping. Scraping has a number of issues, in my opinion:

  • Websites that rely heavily on indexers might have out-of-sync state, so the scraper would also get out-of-sync data.
  • Latency and caching will always depend on how the scraped website handles them: they might have server-side caching enabled, the connection between their website and their indexer might not be performant, and their indexer might be slow because it processes a lot of unnecessary data or performs many async operations. Finally, the indexer SDK itself might handle unfinalised blocks and re-orgs in such a way that it lags 30-200 blocks behind the latest block (I've seen this myself on some well-known indexers in Polkadot).
  • By scraping these websites you can incur additional costs for them, or degrade the experience of their users, if you accidentally make too many requests too often or their servers don't expect the extra load.

Some things I would propose to try, depending on the type of data you are after:

  • Subsquid's ArrowSquid has a great example of a simple ERC-721 indexer (see the processor sketch after this list). ERC-721 is one of the simplest things to index, as it's mostly just a single event, "Transfer", which does everything; the only additional thing you might have to do is fetch its metadata for name, image, description and attributes, but again, the same Subsquid example shows how to use Multicall v3. Here's the example I am talking about: GitHub - subsquid-labs/evm-multicall-example
  • Subquery recently released their Universal NFT API Unified NFT API (Beta). I don't think they have websocket subscriptions available, but you can just use a cron job. You can filter by chainId there to get all Moonbeam NFTs, for example, and limit results + order by block so the cron job fetches just the latest ones.
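Here's a minimal sketch of the Subsquid route, assuming the ArrowSquid SDK (@subsquid/evm-processor) with an erc721 ABI module generated by evm-typegen; the collection address is a placeholder and method names shift slightly between SDK releases, so treat this as a starting point rather than copy-paste:

```ts
import { EvmBatchProcessor } from "@subsquid/evm-processor";
import { TypeormDatabase } from "@subsquid/typeorm-store";
import * as erc721 from "./abi/erc721"; // generated with `squid evm-typegen`

const COLLECTION = "0x...".toLowerCase(); // placeholder: the NFT contract you track

const processor = new EvmBatchProcessor()
  .setGateway("https://v2.archive.subsquid.io/network/moonbeam-mainnet")
  .setRpcEndpoint("https://rpc.api.moonbeam.network") // a public RPC works here
  .setFinalityConfirmation(75) // how far behind head counts as final (re-org safety)
  .addLog({
    address: [COLLECTION],
    topic0: [erc721.events.Transfer.topic], // only ERC-721 Transfer events
  });

processor.run(new TypeormDatabase(), async (ctx) => {
  for (const block of ctx.blocks) {
    for (const log of block.logs) {
      // Every requested log is a Transfer, so decode it directly
      const { from, to, tokenId } = erc721.events.Transfer.decode(log);
      ctx.log.info(`Transfer of token ${tokenId}: ${from} -> ${to}`);
    }
  }
});
```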

These two solutions would get you just Mints (if from is AddressZero), Transfers, and Burns (if to is AddressZero). For Sales it might be trickier: most marketplaces use their own trading implementation, but I suspect (although I haven't tried it myself) that you should be able to look for ERC-20 transfers in the logs immediately after or before a Transfer event (one where neither to nor from is AddressZero) and detect Sales that way; that also gives you the price and who purchased it.
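To make the mint/burn/transfer classification concrete, here's a rough sketch with ethers v6 against Moonbeam's public RPC (the collection address is a placeholder); the same AddressZero checks apply whichever indexer delivers the logs:

```ts
import { ethers, ZeroAddress } from "ethers";

const provider = new ethers.JsonRpcProvider("https://rpc.api.moonbeam.network");
const abi = [
  "event Transfer(address indexed from, address indexed to, uint256 indexed tokenId)",
];
// Placeholder address: the ERC-721 collection you want to follow
const collection = new ethers.Contract("0xYourCollection", abi, provider);

collection.on("Transfer", (from: string, to: string, tokenId: bigint) => {
  if (from === ZeroAddress) {
    console.log(`Mint: token ${tokenId} to ${to}`);
  } else if (to === ZeroAddress) {
    console.log(`Burn: token ${tokenId} from ${from}`);
  } else {
    console.log(`Transfer: token ${tokenId} ${from} -> ${to}`); // possibly a Sale
  }
});
```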

If you need to track marketplace Listings it might get trickier. Some marketplace contracts (the ones responsible for trading) are verified (like Moonbeans'), while others are not (like ours at RMRK/Singular), so their ABI is not available or the code is obfuscated (bytecode only in the block explorer). But since only a handful of the marketplaces you support don't have an ABI publicly available on Moonscan, you can just look at the block explorer's Events tab to work out which event is the Listing, copy its topic hash, and check against it in the indexer. You should be able to detect the topic pretty easily IMO; then you'll just have to work out what each encoded parameter value means (if the contract is not verified). While annoying, I don't think there are that many unverified marketplace contracts, and with a bit of fiddling you can work out the events. This approach to Listing detection is a bit hacky and not the cleanest, but it's used by other projects; evrloot's Discord bot, for example, worked out how to detect our Listings without our help: https://github.com/theshadesofsummer/evrloot-listings/blob/main/src/publish-listing.js
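As an illustration of the topic-hash approach, here's a hypothetical sketch with ethers v6; the topic hash, marketplace address, and parameter layout are all made up and would have to be worked out from the explorer's Events tab, exactly as described above:

```ts
import { ethers } from "ethers";

const provider = new ethers.JsonRpcProvider("https://rpc.api.moonbeam.network");

// Placeholder: the keccak256 topic hash copied from the Events tab
const LISTING_TOPIC = "0x...";
// Placeholder: the unverified marketplace trading contract
const MARKETPLACE = "0xMarketplaceContract";

async function pollListings() {
  const latest = await provider.getBlockNumber();
  const logs = await provider.getLogs({
    address: MARKETPLACE,
    topics: [LISTING_TOPIC],
    fromBlock: latest - 300, // roughly the last hour at ~12s Moonbeam blocks
    toBlock: latest,
  });

  const coder = ethers.AbiCoder.defaultAbiCoder();
  for (const log of logs) {
    // Guessed layout, e.g. (uint256 tokenId, uint256 price); adjust once
    // you have worked out what each encoded parameter value means
    const [tokenId, price] = coder.decode(["uint256", "uint256"], log.data);
    console.log(`Listing: token ${tokenId} at ${ethers.formatEther(price)} GLMR`);
  }
}

pollListings().catch(console.error);
```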

A lot of indexers have free plans/options. Subsquid is free for now, and if you self-host it, it might be free forever; you can always use a public RPC there too. But in the long run, reading from the chain will be a much more robust and more real-time solution than scraping, and as a bonus you will gain some good web3 experience :slight_smile:

5 Likes

Hey Yuri,

Thank you for getting in touch about the proposal and for taking the time to assist me in improving the project. The way I implemented it initially was based on my familiarity with web 2.0 techniques, which seemed like the most straightforward approach.

Currently, the project already includes all the necessary data about each project, such as metadata and images, so reading transactions on-chain will work. I agree that scraping is not the best or most elegant method, as it could ultimately lead to a worse experience for users on the marketplaces.

After reading your suggestions, I believe that implementing your solutions will make the project more reliable, faster, and less demanding on resources for marketplaces.

The more web3 experience I get, the better. Before adding RMRK projects, reading on-chain data is the first thing on the list that needs attention.

I really appreciate you reaching out to me :heart:

3 Likes

Yeah, no worries. I think it's good practice in any case, and you can do it in stages, starting with filtering by a single NFT collection address; perhaps later, once you have a list of topics to listen for in the indexer, you can switch to a "*" selector (in the case of the Subsquid example, shown below), so you don't even need to maintain a list of NFT projects and can just capture them all in one go (if you wish to, of course). Let me know if you need any help or have any questions; I think this can be an interesting use case.
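For reference, the catch-all variant of the Subsquid sketch from earlier could look like this; KNOWN_TOPICS is a hypothetical list you would build up yourself, and omitting the address filter is what stands in for the "*" selector:

```ts
import { EvmBatchProcessor } from "@subsquid/evm-processor";
import * as erc721 from "./abi/erc721"; // generated with `squid evm-typegen`

// Hypothetical list of topic hashes you maintain instead of contract addresses
const KNOWN_TOPICS = [
  erc721.events.Transfer.topic,
  // ...Listing/Sale topics worked out from the explorers, added over time
];

const processor = new EvmBatchProcessor()
  .setGateway("https://v2.archive.subsquid.io/network/moonbeam-mainnet")
  .setRpcEndpoint("https://rpc.api.moonbeam.network")
  .setFinalityConfirmation(75)
  .addLog({
    topic0: KNOWN_TOPICS, // no `address` filter: capture from every contract
  });
```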

5 Likes

Great to hear that it passed. I just wanted to touch on why I felt a promoted post would help: it would get the project 'outside' of the echo chamber we are sometimes in and reach more eyes organically, which could lead to ecosystem growth.

2 Likes

Thank you so much @yurinondual , for sharing your knowledge and expertise :pray:

Love it that you took the time to stop by :rocket:

3 Likes

Understood! I appreciate your suggestion of using promoted posts to reach a wider audience and stimulate ecosystem growth. Considering the rewards the account is currently generating by staking half of the received funds, this option seems worth exploring further.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.

@dotdatamaxi hey sir, could you provide an update about this? Thanks

Sure!

Currently, 33 people get updates through Telegram here: Telegram: Contact @glmrnfttracker. I hope more people will engage soon.

Active ERC-721 projects on Moonbeam each have a dedicated channel where users get their updates. The scrapers run every hour; when changes happen, users are informed.

What I see is that buying/selling activity on projects is very low due to the sentiment out there.

Though when big price drops happen on certain projects, those often get bought up very quickly.

As it stands, Twitter/X is not implemented yet, and I don't think the current posts are suitable for the platform, as posts aren't served linearly there. I have to find a way to serve posts that fit the platform. When I see more activity and the bull market is back on, I will enable Twitter Blue/X Premium. I want to get the most out of the membership.