
teeeeeeeeeeeeeeeeee.jpg - 51.08 KB (474x474)

It's Kasane Teto's birthday!!!!!!!!

>>

who cares. worst vocaloid and adopted by newfags. Luka will always be on top

>>

>>12276 Nuh uh angry

>>

>>12277 Yeah huh ( ´ω`)


1586729120494.jpg - 60.02 KB (1200x675)

BRAND NEW social website idea concept. Steal it. DELIBERATELY unlike social media or imageboards.

A brand new person opening the website is greeted by a page filled with a plethora of unknown symbols: colourful logos, all of the same size. Sigils, if you will. But more like seals, like flags. All the same size (if circles, all of them are circles). No names. No other identifiers. Just the seals. You just intuit which means what. Pure intuition. The seals don't seem to mean anything; they're always a seemingly gibberish image. The only other text on the screen is the instruction to pick one carefully, because once picked, that's the Party you start with.

A Party is a group of min 4 up to 6 people. Once in a Party, you find out that you can't speak (type) aside from a SINGLE emoji per post. The Party members start to give you Questions with Answer Choices. After some time answering, you find out you've gained some really basic Words: pronouns, prepositions, grammar words, adverbs. But of course, not "Life" Words (Words beyond grammatical purposes, like objects in the real world). The Party, if they're satisfied and haven't decided to kick you out, lets you stay in it.

The rest of this website is at this point free-for-all ideas on what to implement next. After this starting point, there are two essential things: a) gaining Words - the website owner decides how this is to be done; b) the chance for a Party to one day meet other Parties that are also using the site - the website owner decides how this is to be done too.

The whole meaning of this thing is to give meaning to our own Speech on the internet; that is, to be able to even SPEAK at all, one must WORK towards it. The site is, yes, 'gamified'. It's deliberately contrary to the whole concept of social websites in that the more energy and time you spend on the site, the more you're able to speak and express yourself. The more effort you put in, the more freedom you have.

You are fully CENSORED at the very beginning, speaking basic grunts and mere emojis, and only by your own effort do you build up your own FREE('d) Speech from absolutely nothing. Like I said, the whole concept of the website is deliberately the opposite of social media and imageboards, where you enter and can start saying anything you like (100/100). Here, you begin right from zero (0/100).

Continuing a bit. From afar you see some people conversing in FULL-ON MULTIPLE PARAGRAPHS per post. Now you have something to look forward to. Now you understand what the game of the website is: to be able to fully express yourself. By the way, the "random symbol" thing at the very beginning is to facilitate randomness. You don't get to choose what you like; you're forced to be with a group of people that by luck you might like or not like. This is deliberate. If you dislike the Party, you must spend some time until you're able to meet other Parties and join/create another Party instead. That's the whole concept. Everything about you here, you start from zero.
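The gating rule described above can be sketched in a few lines. All names here (Member, earn_word, can_post) are my own invention, not part of the concept as written; this is a toy model of the "earned speech" filter, nothing more.

```python
# Sketch of the "earned speech" posting filter: a brand-new member may
# post only a single emoji, and full posts are limited to earned Words.
# All names here are hypothetical.

def is_emoji(token):
    # crude stand-in: treat any single non-ASCII character as an emoji
    return len(token) == 1 and ord(token) > 127

class Member:
    def __init__(self):
        self.words = set()        # grammar and "Life" Words earned so far
        self.emoji_only = True    # true until the Party grants basic Words

    def earn_word(self, word):
        self.words.add(word)
        self.emoji_only = False

    def can_post(self, tokens):
        """Allow a post only if every token is an earned Word (emoji are
        always free); a level-zero member gets one emoji and nothing else."""
        if self.emoji_only:
            return len(tokens) == 1 and is_emoji(tokens[0])
        return all(t in self.words or is_emoji(t) for t in tokens)

m = Member()
assert m.can_post(["😀"])          # single emoji: allowed at level zero
assert not m.can_post(["hello"])   # words are locked until earned
m.earn_word("hello")
assert m.can_post(["hello"])
```

The interesting design question is where `earn_word` gets called from - the concept leaves that to the site owner (Questions with Answer Choices, or whatever mechanism they invent).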


sanicheadhogri.gif - 33.77 KB (356x328)

Helloo, Hikarins! Are y'all doing good? I hope so! WELL, LET'S TALK ABOUT SONIC. I NEED TO KNOW: WHICH IS YOUR FAV SONIC GAME? Do y'all have a fav Sonic character?

>>

>>12256 Woo! I never played Sonic Mania, should I try it? Sonic Heroes seems nice, that was released on the PS2 right?

>>
1-01 Sonic X Theme (Gotta Go Fast!).mp3 - 1003.12 KB (250x250)

キタ━━━(゚∀゚)━━━!!

>>
84232.gif - 4790.70 KB (700x528)

Sonic CD, hands down.

>>

I'm going to be honest, I have been getting filtered by the OG Sonic the Hedgehog game pretty hard. It's very challenging. cry

>>

>>12267 You should definitely try it. Also yes it was on the PS2. Also check out the fan remake of Sonic Triple Trouble


Boku_No_Pico_(Peeing_scene).gif - 239.28 KB (430x242)

キタ━━━(゚∀゚)━━━!!

>>

>>12228 Whoag it's raining!!love

>>
hmm.gif - 1995.68 KB (498x498)

In my theory, they are watering the plants!neco

>>

I want to be watered lovedrool

>>

>>12255 C'mere hikarin, I have some gardening to do. snicker


converted_osaka.png - 7.46 KB (164x164)

Since we managed to put Hikari-chan in Japan, let's go ahead and leave a mark in unusual places around the world. In the last thread someone suggested putting the Osaka in another location, since the other one was being removed and there weren't enough people restoring it. I found a decently populated zone in Chile with a large blank space that we can claim snicker
Blue Marble (for loading .pngs): https://github.com/SwingTheVine/Wplace-BlueMarble
Blue Marble Coords: 623 1202 228 944
Map Themes (use a dark theme so you can see where the white pixels are): https://greasyfork.org/en/scripts/546642-wplace-map-themes
Direct Link to Location: https://wplace.live/?lat=-29.996731850630262&lng=-70.42667025322265&zoom=12.894664790191456

Your fortune: Average Luck

>>

>>11313 The problem was with bots spamming Deltarune and pride flags literally everywhere. It's mostly subsided for now. Now I see Korean idols everywhere.

>>

>>11334 idk, I just went to five random places on the map and all of them had at least one flag, three of them having trans flags.

>>
image.png - 41.51 KB (900x600)

Can you do one in my country (aka Egypt)?

>>
792328b1fc89ba551ae83f840a8163d7.jpg - 50.10 KB (736x736)

>>43 Another Egyptian spotted!

>>

>>12257 Egyptbro, give me a location and an image and i'll help


Screen Shot 2026-03-30 at 4.51.32 AM.png - 92.20 KB (1132x839)

Hello, I'm back and I have a major update to my VOCALOID project! I have successfully achieved shape-invariant pitch transposition! Here it is.

First, the original audio: https://files.catbox.moe/zmt3rr.wav
Now my version with WBVPM (pitched down by an octave): https://voca.ro/1mJ5qljrp9hD or https://files.catbox.moe/kho97n.wav
And a version using a naive pitch shift: https://files.catbox.moe/xs39bq.wav

Notice that my version, while having more noise, sounds more natural and has less phasiness. This is particularly noticeable if you play both at very low volume. One sounds much more 'human' than the other. Also note that this is an extreme example with an octave shift (1200 cents); in practice, shifts would typically be far less. This also doesn't implement several other parts of the system (more on that later). I'll explain all of this in a moment, but first, I'd like to correct some major biographical errors. Since this is a long post, I've divided it into sections.

BIOGRAPHICAL CORRECTIONS

In the last post, I claimed that VOCALOID1 used Narrow-Band Voice Pulse Modeling while VOCALOID2 and onwards used Wide-Band Voice Pulse Modeling. This was incorrect, and additionally it was the source of most of my confusion surrounding the paper. What actually happened is that the research technology that would later become VOCALOID1 started out as work to improve the existing Spectral Modeling Synthesis system that had been developed in the early 1990s. This improvement work began in the late 1990s. But importantly, this system evolved, and techniques from it were incorporated with techniques from a system being developed called a Phase-Locked Vocoder, and this system would be released as VOCALOID1. In the mid-2000s, work began on taking the techniques learned from improving SMS and the PLVC-based system and attempting to combine them with the much older and well-known TD-PSOLA system.

Importantly, TD-PSOLA (Time-Domain Pitch Synchronous OverLap and Add) was a time-domain system, while SMS was a frequency-domain system (and TD-PSOLA was also pitch-synchronous, hence the name, while SMS had a constant hop size). The first technique they developed was Narrow-Band Voice Pulse Modeling, and later Wide-Band Voice Pulse Modeling. Wide-Band Voice Pulse Modeling ended up being used in VOCALOID2.

Now that I understand this, I also understand the major mistake I made when reading the paper: I was reading it from the perspective of an implementer, thinking of the sections as the steps to implementing it instead of as research. I had thought that section 2.2 described the core processing algorithms, when it was actually about SMS - and importantly, about *the improvements they made to SMS*, not a complete description of SMS, since SMS was already an established technique. Hence my confusion over why some things were seemingly vaguely explained: *the paper wasn't about them*. At the same time, much of that section is still very useful, because much of that research was also incorporated into the later techniques.

RESULTS

I have successfully implemented Wide-Band Voice Pulse Modeling; synthesis; and the pitch transposition, time stretching, and timbre scaling algorithms. Additionally, I have also finished implementing the full version of the pitch estimation module, changed the code to work using overlapping windows, implemented the window adaptation system, and fixed countless bugs.
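For reference, a "naive pitch shift" of the kind compared against above amounts to resampling by the pitch ratio, which transposes the harmonics but drags the spectral envelope (formants) along with them - which is a big part of why it sounds less natural than a shape-invariant method. A generic sketch of that baseline (not the poster's code):

```python
import math

def naive_pitch_shift(samples, ratio):
    """Resample by `ratio` (e.g. 2.0 = up an octave) with linear
    interpolation. This shifts pitch AND formants together, unlike a
    shape-invariant method, which keeps the envelope in place."""
    n_out = int(len(samples) / ratio)
    out = []
    for i in range(n_out):
        pos = i * ratio
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a * (1 - frac) + b * frac)
    return out

# A 100 Hz sine resampled with ratio 2 becomes a 200 Hz sine, half as long.
fs = 8000
sine = [math.sin(2 * math.pi * 100 * t / fs) for t in range(fs)]
shifted = naive_pitch_shift(sine, 2.0)
assert len(shifted) == fs // 2
```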

>>

Another improvement relating to windows is the window used for the harmonics that are fed into MFPA. Originally, I had used the same Kaiser-Bessel window for both. I later switched to a Blackman-Harris -92dB window, which I had seen mentioned in the paper. This resulted in a significant improvement. Another improvement I tried was adapting the window size to a value relative to the period of the estimated fundamental frequency - using the same number of periods as are used for the Kaiser-Bessel window used for TWM - and I noted a substantial improvement, even greater than the improvement from switching to the Blackman-Harris window in the first place. Indeed, this matches the results contained in the study: in the WBVPM section, they observe a considerable improvement (up to -10dB) when using an adaptive window size compared to a fixed window for narrow-band analysis. In that same section, they also found 2 to be the ideal number of periods for minimizing noise, and they did experiments with a Hann window as well. Perhaps experimenting with these ideas could lead to improvements, although that section is about getting an accurate spectrum and reconstruction, whose needs may differ somewhat from those of MFPA. Another idea could be using a separate function for determining the adaptive number of periods, as opposed to reusing the value from the Kaiser-Bessel window as I am currently doing. Perhaps always using an integer number of periods could be beneficial. Another idea is using only one or two periods, which would provide better time resolution and could be better suited for the wide-band analysis we are doing.

Another potential improvement I have thought of, but not yet tested, is modifying the constant parameters of the Blackman-Harris window in a manner similar to the method Cano 1998 describes for the Kaiser-Bessel window beta (and that I have used for it), where the constant parameter (or in this case, parameters) is modified in accordance with the fundamental frequency. Another potential improvement to the MFPA results could be the use of a peak selection algorithm. I had previously used a very simple one I found in another resource by the UPF Music Technology Group, although that algorithm did not seem to show an improvement; I later removed it and saw no observable detriment. The paper does not provide details on this specifically, but I now understand why, so I should do more research into how this was tackled in SMS. One idea I've thought of myself is to calculate the estimated harmonics and then search the surrounding area for peaks. We then select the peak with the minimum error, where that error is determined based on distance and amplitude. One formula I have thought of for the error calculation, but not tested, is amplitude / distance^2. We want to search far enough to always have the best candidate, but not so far as to be computationally inefficient or run into floating-point error and instability in the error function. A potential improvement to this approach: instead of determining the initial estimate for the harmonic frequency by multiplying the fundamental frequency by the harmonic index, we could instead add the fundamental frequency to the peak that was chosen as the last harmonic. This would account for drift caused by inaccuracies in the f0 estimation as well as distortion in the harmonics. However, it also runs the risk of drifting away from the harmonics. A possible solution could be blending this estimated harmonic frequency with the one obtained by multiplication with the fundamental frequency. This could act as a sort of course correction that would work gradually, while keeping the benefits of basing the estimate on the previously selected harmonic peak.
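The peak selection idea above (estimate, search nearby, weigh by amplitude and distance, blend the multiplied and incremented estimates) could look something like this. This is my own toy rendering of the untested proposal: the amplitude / distance^2 formula is framed as "higher is better", and the 0.5 blend weight is an arbitrary choice.

```python
def select_harmonic_peaks(peaks, f0, n_harmonics, search_radius):
    """peaks: list of (freq_hz, amplitude) spectral peaks.
    For each harmonic, search near the estimate and keep the best
    candidate by the score amplitude / distance^2 (strong and close
    wins). The estimate for harmonic k blends k*f0 with the previously
    selected peak plus f0, to tolerate f0 error without drifting."""
    selected = []
    prev = 0.0
    for k in range(1, n_harmonics + 1):
        est_mult = k * f0                      # plain multiple of f0
        est_prev = prev + f0                   # previous peak + f0
        # blend the two estimates (0.5 weight chosen arbitrarily here)
        est = est_mult if k == 1 else 0.5 * est_mult + 0.5 * est_prev
        best, best_score = None, -1.0
        for freq, amp in peaks:
            dist = abs(freq - est)
            if dist > search_radius:
                continue
            score = amp / (dist * dist + 1e-9)  # eps avoids div by zero
            if score > best_score:
                best, best_score = freq, score
        if best is None:
            best = est                          # no peak found: keep estimate
        selected.append(best)
        prev = best
    return selected

# Harmonics of 100 Hz with slight detuning; one spurious weak peak at 150 Hz.
peaks = [(101.0, 1.0), (150.0, 0.01), (203.0, 0.8), (305.0, 0.6)]
chosen = select_harmonic_peaks(peaks, 100.0, 3, 30.0)
assert chosen == [101.0, 203.0, 305.0]
```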

>>

Another potential improvement could be found by fixing sudden jumps in fundamental frequency that last for only a few analysis frames and then return to roughly the same fundamental frequency as before the jump. Cano 1998 calls for a "hysteresis cycle" - though I am not sure exactly what that means. I have implemented a simple system that discards large relative jumps that last for only a single frame. However, this has two major issues. The first is that these jumps often last for more than just one frame. The second is that if a legitimate jump in f0 occurs and stays, this introduces one frame of lag.

MAXIMALLY FLAT PHASE ALIGNMENT

My last post was about MFPA; since then, I have made a number of improvements to this part of the system. I don't believe I have made any changes to the core MFPA function itself, but I have made a lot of improvements to the MFPA refinement algorithm as well as the code surrounding MFPA. One major improvement I made only recently. The previous issue stemmed from what I now believe to have been a misunderstanding. The MFPA algorithm gives a phase shift for each frame. This can be converted into a time offset. However, unless the frame rate is exactly the same as the fundamental frequency (in the instantaneous sense), this will give more or fewer pulse onsets than actually exist. At the time, I was using a high-pitched sample for testing whose f0 was much faster than the analysis frame rate implied by the hop size of 256 samples (~172 frames per second at 44.1kHz). Because of this, there was usually more than one pulse in between each detected pulse onset. At the time, I had thought that getting all the pulse onsets was the purpose of the MFPA refinement algorithm, which is why I was confused that it was described as choosing a *subset* of the pulses and not a superset. I had implemented the MFPA refinement algorithm, but it was buggy and either didn't work or did nothing. Later, I began thinking of ways to get the in-between onsets myself.

My idea was to add increments of the f0 period until the next pulse was reached. I eventually realized that the purpose of the MFPA refinement algorithm was not interpolation, but to take a list of pulse onsets that could include multiple close estimates for the same pulse and narrow it down so there is only one per pulse, such that the best one is chosen (actually it looks at a few additional candidates, which somewhat tripped me up into thinking it was about interpolation for a long time). For this to happen, the analysis frame rate needs to be greater than the fundamental frequency (if they were equal, it would likely slowly drift and eventually miss an onset). I realized that the reason the hop size was high (and thus the maximum fundamental frequency low) in the paper was that they were using low-frequency audio samples in the range of 50-100Hz, while I was using samples around 300Hz. I adjusted the hop size to 96 and got great results. I think I had tried this before, but it had not worked - and it couldn't have, because this is only possible without decreasing the size of the analysis window within the overlapping-window framework, which I had not implemented yet at the time I first tried. However, this low hop size is relatively computationally expensive, so much so that f0 estimation and MFPA peaks take up most of the execution time. A possible improvement would be to use a lower analysis rate and actually use the interpolation method, but then feed the interpolated pulses into the MFPA refinement algorithm, as you would likely get better results that way. I have fixed numerous bugs within the MFPA refinement implementation. A noteworthy one is that previously, I was not considering that the analysis window's timestamp refers to its center, not its start. Because of that, the new onsets are now offset compared to the old ones, but I believe they are now correct.
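My reading of the refinement step described above, as a sketch: several frames can propose near-duplicate onset times for the same physical pulse, and refinement collapses each cluster down to the single best candidate. The clustering threshold (half a period) and the caller-supplied score are my assumptions, not the paper's method.

```python
def refine_onsets(candidates, period, score):
    """candidates: pulse onset times (in samples); several analysis
    frames may propose near-duplicate times for the same pulse.
    Collapse candidates closer than half a period into one cluster and
    keep the highest-scoring candidate per cluster. `score(t)` ranks
    candidates; a real implementation would use the MFPA flatness
    measure, here it is supplied by the caller."""
    if not candidates:
        return []
    candidates = sorted(candidates)
    refined = []
    cluster = [candidates[0]]
    for t in candidates[1:]:
        if t - cluster[-1] < period / 2:
            cluster.append(t)          # same pulse, another estimate
        else:
            refined.append(max(cluster, key=score))
            cluster = [t]
    refined.append(max(cluster, key=score))
    return refined

# Three frames vote for the pulse near t=100, two near t=250 (period 147).
# Toy score: just prefer the latest candidate in each cluster.
cands = [99, 100, 102, 249, 251]
best = refine_onsets(cands, 147, score=lambda t: t)
assert best == [102, 251]
```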

>>

The pulse onset selection is now quite good: https://files.catbox.moe/ik27fw.png
Close-up: https://files.catbox.moe/urby2w.png
However, there are still deviations. Here is one at around 20k samples in one of my test audio samples: https://files.catbox.moe/76nlgo.png
So there is still some work to be done. Another potential improvement could be the introduction of a system for detecting formants and weighting them less in the MFPA calculation. Recall that phase is roughly constant within a formant, but not between formants.

START FRAMES

In the audio samples I have provided so far, I have cut off the first part of the audio. The issue is with pitch estimation for early frames. Remember that our analysis window is multiple f0 periods in size. Because of this, it can't fit at the start, so it has to be decreased to a much smaller size. This is much more of an issue now that I have decreased the hop size substantially. I have now set it to skip the first few frames, because otherwise the forced, extremely small window size causes the whole pitch estimation system to irreversibly destabilize. I've been thinking of solutions to this problem. One could be to let the analysis window take on the full size it wants and pad the area before the start with zeros (or maybe something else); this could also be used for the end. Possibly the most promising solution I have come up with, although I have not tested any of these, is to backfill the earlier pulses using the first good estimated pulse onset minus integer multiples of the period from the first good fundamental frequency estimate. This should work assuming the first pulse and fundamental frequency estimates are both good, the fundamental frequency stays relatively constant over the start section, and the start section only contains a few pulses.

Luckily, the last criterion will always be satisfied: the size of the start section is half the size of the window, so the number of pulses is (window_size / period) / 2; but the window size in the adaptive framework is just a small number of periods, so we are left with the (mostly) constant adaptive_period_count / 2 as the number of pulses.

WIDE-BAND VOICE PULSE MODELING

Regarding the patent issue, I have determined that it applies only to the specific technique in Bonada 2008's WBVPM of using periodization to achieve a non-integer-size discrete Fourier transform. However, that section also offers another option: interpolation. I have implemented it and found it to work well. I ran a test and found a noise level of about -140dB (for reference, 1 ulp for a single-precision float is about -145dB), which is extremely negligible and comparable to the results in the study for the periodization technique. I have also added the ability to use a few extra samples on each side to improve the spline. However, I have not tested the consequences of this variation, and I don't know whether the original implementation did something like this.
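The backfill proposal is simple enough to sketch directly (hypothetical names; it assumes, as stated, a good first onset and a roughly constant period over the short start section):

```python
def backfill_onsets(first_good_onset, period, start=0.0):
    """Fill in pulse onsets before the first reliable estimate by
    stepping backwards in whole periods from it, assuming f0 stays
    roughly constant over the start section."""
    onsets = []
    t = first_good_onset - period
    while t >= start:
        onsets.append(t)
        t -= period
    onsets.reverse()
    return onsets

# First good onset at sample 1000, period 147 samples:
assert backfill_onsets(1000.0, 147.0) == [118.0, 265.0, 412.0, 559.0, 706.0, 853.0]
```

Per the argument above, the loop runs only about adaptive_period_count / 2 times, so cost is negligible.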

>>

Text in the patent:
"generating for each pulse a sequence of repetitions of said audio pulse, said audio pulse being repeated according to its own characteristic frequency; deriving frequency domain information associated with at least some of the sequences of repetitions of said audio pulses, each said sequences of repetitions of said audio pulse being represented as a vector of sinusoids based on the derived frequency, said vector of sinusoids corresponds to a sinusoidal series expansion of the specific audio pulse;"

Bonada 2008, WBVPM, NON-INTEGER SIZE FFT:
"PERIODIZATION: one period of the input signal is windowed with wR(n), and repeated several times at the rate defined by T so that the FFT buffer of length M covers in the end several periods. The repetition implies interpolating both the signal samples and the window function. Then the resulting signal sr(n) is windowed by an analysis window function wA(n), and the spectrum obtained is actually the convolution of such analysis window response WA(f) by the spectrum of Sr(f) sampled at harmonic frequencies"

TUNING

I have come up with two techniques for tuning that apply in different ways.

AUTOMATIC TUNING - The idea is that we use a stochastic statistical algorithm that minimizes a cost function by adjusting a set of parameters (one I looked into that seems promising is global-optimization SPSA). The parameters in this case would be constants used in the C code. A Python script would replace placeholders with the values picked by the minimization algorithm and then compile and run the C program. The results would then be compared to a reference by another algorithm/program, and the comparisons summed together to give a cost value. A program for doing this that I plan to research is called AudioVMAF. I believe it was originally designed to test audio compression, but I hope it could also be useful here.

MANUAL TUNING - In this method, we insert instrumentation into various intermediate values calculated in the program. Then, for one very small snippet of audio, we use Automatic Tuning to determine ideal values. Then a programmer tries to write code that better matches these desired values. If successful, it can be tested in general over the whole dataset. If it is not an improvement, then the most negatively affected audio snippets can be selected and a similar process used to decrease the change for them while keeping it for the snippets that benefit.

Both of these methods would work best for matching against another vocal synthesizer, since the timings and parameters can match exactly. However, they may also be adaptable to optimizing parameters for real-world (and thus also realistic) voices. It would have to work somewhat in reverse, though, in that someone would sing first and then a note sequence would have to be made that matches the singing almost exactly.

OTHER CONSIDERATIONS

There are many more potential tweaks and improvements. I have many dozens accumulated and plenty more to research, test, and implement. One widely applicable variation is using logarithm-based scales. I still don't have an answer to the voiced/unvoiced frame decision issue, but I will look for SMS research about that and older Bonada papers. One heuristic I thought of is noise / amplitude^2 > threshold.
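A bare-bones SPSA loop of the kind described under AUTOMATIC TUNING. The toy cost function here stands in for "substitute placeholders, compile, run the C program, compare with AudioVMAF"; the gain/decay constants are arbitrary choices, not tuned values.

```python
import random

def spsa_minimize(cost, theta, iters=200, a=0.5, c=0.1, seed=0):
    """Simultaneous Perturbation Stochastic Approximation: estimate the
    gradient from just two cost evaluations per step, regardless of the
    number of parameters, by perturbing all of them at once with a
    random +/-1 vector. In the tuning setup described above, `cost`
    would recompile and run the C program and score its output."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k            # decaying step size
        ck = c / k ** 0.25    # decaying perturbation size
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        plus  = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        # gradient estimate: (y+ - y-) / (2 ck delta_i); delta_i = +/-1
        g = (cost(plus) - cost(minus)) / (2 * ck)
        theta = [t - ak * g * d for t, d in zip(theta, delta)]
    return theta

# Toy cost: distance from (3, -2). SPSA should move theta close to it.
best = spsa_minimize(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2, [0.0, 0.0])
assert abs(best[0] - 3) < 1.0 and abs(best[1] + 2) < 1.0
```

For the real use case the two cost evaluations per step are the dominant expense (two compiles and runs), which is exactly why SPSA's parameter-count-independent cost is attractive here.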

>>

ADDENDUM (because I just realized I forgot a bunch of things I meant to put into this post)

This is still a simplified model. It does not take into account the Excitation plus Resonance model or the Spectral Voice Model. It uses a linear transform and not generated trajectories. One thing I was thinking about was the part in the WBVPM section where they said that one of the disadvantages of WBVPM was not being able to separate harmonic and non-harmonic components. I also read that the noise is embedded as fluctuations in the spectrum of each voice pulse and over time, which is what I had presumed, because the information has to go somewhere. I was thinking: what if you took each harmonic's values with the pulse onset times as the positions in a spline, then interpolated at regular intervals, then applied the Fourier transform, then separated the highest frequencies from the others? Take the others, apply the inverse Fourier transform, then rebuild a spline from this and interpolate the values back at the onsets. I wonder if this would work. There would be loss, though, because of the resampling steps. This could be decreased by taking more samples. You could also apply a correction by resampling and sampling back to calculate the resampling loss itself without the removal of the high-frequency modulations, and then add this difference back to the main pulse information after the separation.
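The spline/resample separation idea can be sketched end to end with a naive DFT. Everything here is an arbitrary simplification of the idea above: linear interpolation instead of a spline, a fixed grid size, and a hard bin cutoff.

```python
import cmath
import math

def lowpass_trajectory(times, values, n_grid=64, keep_bins=4):
    """Separation sketch: harmonic values at irregular pulse-onset times
    are interpolated onto a uniform grid, transformed with a naive DFT,
    high-frequency bins are zeroed (keeping mirror bins so the result
    stays real), and the smoothed trajectory is read back at the
    original onsets."""
    t0, t1 = times[0], times[-1]
    grid = [t0 + (t1 - t0) * i / (n_grid - 1) for i in range(n_grid)]

    def interp(ts, vs, x):          # linear interpolation, ts ascending
        for i in range(len(ts) - 1):
            if ts[i] <= x <= ts[i + 1]:
                w = (x - ts[i]) / (ts[i + 1] - ts[i])
                return vs[i] * (1 - w) + vs[i + 1] * w
        return vs[-1]

    sig = [interp(times, values, g) for g in grid]
    spec = [sum(sig[n] * cmath.exp(-2j * cmath.pi * k * n / n_grid)
                for n in range(n_grid)) for k in range(n_grid)]
    for k in range(n_grid):
        if min(k, n_grid - k) >= keep_bins:   # zero all but the lowest bins
            spec[k] = 0
    smooth = [sum(spec[k] * cmath.exp(2j * cmath.pi * k * n / n_grid)
                  for k in range(n_grid)).real / n_grid
              for n in range(n_grid)]
    return [interp(grid, smooth, t) for t in times]

# Slow trend plus fast wiggle at irregular onsets: the low-passed
# trajectory should sit closer to the trend than the raw values do.
times = [i + 0.3 * math.sin(i) for i in range(32)]   # jittered onsets
slow = [math.sin(2 * math.pi * t / 31) for t in times]
fast = [0.3 * math.sin(2 * math.pi * t / 2.5) for t in times]
raw = [s + f for s, f in zip(slow, fast)]
smooth = lowpass_trajectory(times, raw)
mean = lambda xs: sum(xs) / len(xs)
assert mean([abs(m - s) for m, s in zip(smooth, slow)]) < \
       mean([abs(r - s) for r, s in zip(raw, slow)])
```

The two interpolation passes are exactly the "resampling loss" worried about above; a denser grid (larger n_grid) shrinks it, at quadratic cost with this naive DFT.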


Dunhuang_Yiqiejing_yinyi.svg.png - 2971.66 KB (1920x1120)

Discuss cool Chinese stuff here. I'll start:
>The Yongle Encyclopedia is a Chinese leishu encyclopedia commissioned by the Yongle Emperor (r. 1402–1424) of the Ming dynasty in 1403 and completed by 1408. It comprised 22,937 manuscript rolls in 11,095 volumes. Fewer than 400 volumes survive today, comprising about 800 rolls, or 3.5% of the original work.
>Most of the text was lost during the latter half of the 19th century, in the midst of events including the Second Opium War and the Boxer Rebellion. Its sheer scope and size made it the world's largest general encyclopedia, until it was surpassed by Wikipedia in late 2007, nearly six centuries later
imagine how cool it would've been if it survived cry
>i wanna learn
https://archive.org/details/chineseenglishbilingualvisualdictionary_201909

>>
Chaozhou_Opera-Menglikung.jpg - 64.30 KB

>>11672 Hikarin, have you been playing Civilization VII? Do you know if there are any good mods for Civilization VI? I wonder if it's received updates happy
I learned about the Chaoshan region of China; it has been the origin of a lot of derivatives of its culture, as it has been home to a lot of overseas migration. It had a lot of impact on Singaporeans, along with Indonesian Chinese, who were strangely often the majority landowners. Quite a few people there speak a variety of Chinese that is among the closest to ancient Chinese, and they're often referred to as the Teochew people. They also have a coming-of-age ceremony. Interesting people.
https://www.omniglot.com/chinese/ A useful page for people trying to learn languages, with a lot of documentation
I'd recommend the movies "An Elephant Sitting Still" and "To Live" on https://kisskh.ws/ where I found a lot of East Asian shows
Also try out https://lightnovelpub.org/ for quite a few webnovels, albeit in English; Qidian would be better for Chinese
https://www.konglongmandarin.com/ Another interesting website I came across, to learn Mandarin from Peppa Pig

>>
雁塔_乐游原·青龙寺_11.jpg - 7989.06 KB (5152x3864)

「 衆鳥高飛盡 孤雲獨去閒 相看兩不厭 只有敬亭山 」
「 The birds have vanished down the sky. Now the last cloud drains away. We sit together, the mountain and me, until only the mountain remains. 」
A poem about Jingting Mountain, related to Zazen in Buddhism, which is a form of meditation. It's interesting how Japan diverged from Chinese Buddhism with schools of esoteric Buddhism with skull rituals. https://en.wikipedia.org/wiki/Tachikawa-ryu
Chinese esoteric Buddhism never really flourished because of the persecution that arrived shortly after it began to be taught, and the name for it is literally related to the Tang dynasty - 唐密. It seems to be undergoing a small revival, with the designation of these temples by esoteric monks as important sites, because of their cultural influence outside China.

>>
Laozi_002.jpg - 62.44 KB (600x450)

Been a while since I visited this site, hello Hikarins happy!
搭子文化 is an interesting sociocultural phenomenon within China today where people make friends based on their social status in life https://en.wikipedia.org/wiki/Dazi_culture
https://www.wenlinshe.com/tw/ this is a very cool library with a lot of classics and idioms that could be useful
http://www.robos.org/sections/chinese/cangjie.html this provides information on Cangjie, a useful system for typing Chinese that makes use of the radicals, with keys mapped to them
「 夫禍富之 轉而相生 其變難見也 」
「 Disasters and abundance turn into one another; the change is difficult to see 」
These are two lines from the 淮南子 about the old man near the border who had lost his horse. It is a pretty famous parable, and parables, adages, or idioms in Chinese are often referred to as 成語
https://sites.google.com/site/wenzhoudialect/anthology/baidu-wenzhou-dialect this regards Wenzhounese, a division of Wu Chinese that is also close to Shanghainese. Wenzhounese is famous for its uniqueness, with around eleven tones, and there are phrases describing its impenetrability: "天不怕,地不怕,就怕温州人说温州话" - "Fear not the heavens, nor the earth, but the Wenzhou man speaking Wenzhounese." It is regarded as one of the devil dialects.
https://github.com/ZWolken/Great-Dictionary-of-Modern-Chinese-Dialects/blob/main/%E5%B9%BF%E5%B7%9E%E6%96%B9%E8%A8%80%E8%AF%8D%E5%85%B8.pdf this is a document regarding a 2002 compilation of the forty-two modern Chinese dialects
躺平 is a term meaning "lie flat", mostly used by a lot of NEETs in China
kukuku is an example of an old Chinese imageboard, although their culture is very isolated
https://english.shanghai.gov.cn/en-LearnChinese/index.html a useful site from the Shanghai government
I also learned about tone sandhi, where tones change depending on context and the tone that precedes them in a phrase; it is pretty strange and hard to figure out https://opentext.ku.edu/tingyiting/chapter/lesson22/
邯郸学步 is a unique phrase that I relate to; it warns one not to copy another person blindly, and refers to the Handan walk, which was sort of an old trend in ancient China
https://www.straightdope.com/21343499/is-the-chinese-word-for-crisis-a-combination-of-danger-and-opportunity there is a lot of discussion on whether the Chinese word for crisis is a combination of danger and opportunity
There is also an obscure practice where Chinese characters are picked and used as a form of astrology; there are a few pages on it https://www.stronghold-nation.com/history/myth/literomancy
There is a lot of use of seals or chops within businesses in China, often discussed in the legal context; it is sort of like a more official signature compared to an ordinary 签字 https://harris-sliwoski.com/chinalawblog/is-that-a-real-chinese-company-chop-stamp-seal/

>>

>>12247 very kool resources chikarin, thank yew


image.png - 1584.87 KB (987x1262)

So glad I set up a private szurubooru instance. Now I can finally organize all my images and access them from anywhere happy Feel free to use this thread to share cool images

>>
banana.png - 318.51 KB (853x637)

>>12240 here is a cool image

>>
image.png - 1782.72 KB (1867x1080)
>>

>>12245 when sharing links, you should delete everything after the ?is= it's all tracking info
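For what it's worth, the advice above as a snippet. The parameter names treated as tracking keys here (si, is, utm_*) are assumptions on my part; the point is to keep whatever the link actually needs (like a video id) and drop the rest:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_KEYS = {"si", "is"}   # assumed share-tracking parameter names

def strip_tracking(url):
    """Remove tracking query parameters while keeping the parameters
    the link actually needs to work."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k not in TRACKING_KEYS and not k.startswith("utm_")]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

assert strip_tracking("https://example.com/watch?v=abc&si=XYZ") == \
       "https://example.com/watch?v=abc"
```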

>>

>>12246 OH FOR FUCK SAKE!

>>
1000021660.jpg - 45.19 KB (474x693)

here's a cool image shades


1000022060.jpg - 429.24 KB (636x900)

HIKARIN! You dumb, smelly, roody-poo candy ass NEET! Take a freaking shower already! I can smell you from 3 threads over! angryeww

>>

Shut the fuck up mokou, you know damn well you've been wearing those same clothes for at least 500 years

>>

>>12237 mokou musk... drool drool drool


1767550348918-0.jpg - 252.26 KB (2048x1536)

What's up my fellow hikarin?

>>

>>12222 Which animu do you happen to be watching?

>>

>>12223 catching up on my seasonals, so Frieren, Sentenced to be a Hero, Champignon Witch, and Kunon The Sorcerer Can't See or whatever that one is called. the last two are kinda mid but they're nice to pass the time lol. if you couldn't tell i like fantasy anime... snicker

>>

What's up? The sky, dummy

>>

Well, I'm watching Ranma ½ and I've been enjoying it. I'm currently on Episode 4, which I've just finished; I'm watching the outro right now

>>

>>12233 I need to pick that back up. I'm on season 3 of the 90s version and I like it so far. One of the few classics I'm watching in the dub and I love the Canadian accents

